Search Algorithms (tune.suggest)¶
Tune’s Search Algorithms are wrappers around open-source optimization libraries for efficient hyperparameter selection. Each library has a specific way of defining the search space; please refer to their documentation for more details.
You can utilize these search algorithms as follows:
from ray.tune.suggest.hyperopt import HyperOptSearch
tune.run(my_function, search_alg=HyperOptSearch(...))
Summary¶
SearchAlgorithm | Summary | Website
AxSearch | Bayesian/Bandit Optimization | [Ax]
DragonflySearch | Scalable Bayesian Optimization | [Dragonfly]
SkOptSearch | Bayesian Optimization | [Scikit-Optimize]
HyperOptSearch | Tree-Parzen Estimators | [HyperOpt]
BayesOptSearch | Bayesian Optimization | [BayesianOptimization]
TuneBOHB | Bayesian Opt/HyperBand | [BOHB]
NevergradSearch | Gradient-free Optimization | [Nevergrad]
ZOOptSearch | Zeroth-order Optimization | [ZOOpt]
SigOptSearch | Closed source | [SigOpt]
Note
Unlike Tune’s Trial Schedulers, Tune Search Algorithms cannot affect or stop training processes. However, you can use them together with Trial Schedulers to stop the evaluation of bad trials early.
Want to use your own algorithm? The interface is easy to implement. Read instructions here.
Tune also provides helpful utilities to use with Search Algorithms:
Repeated Evaluations (tune.suggest.Repeater): Support for running each sampled hyperparameter with multiple random seeds.
ConcurrencyLimiter (tune.suggest.ConcurrencyLimiter): Limits the amount of concurrent trials when running optimization.
Ax (tune.suggest.ax.AxSearch)¶

class ray.tune.suggest.ax.AxSearch(ax_client, mode='max', use_early_stopped_trials=None, max_concurrent=None)[source]¶

Uses Ax to optimize hyperparameters.
Ax is a platform for understanding, managing, deploying, and automating adaptive experiments. Ax provides an easy-to-use interface with BoTorch, a flexible, modern library for Bayesian optimization in PyTorch. More information can be found at https://ax.dev/.
To use this search algorithm, you must install Ax and sqlalchemy:
$ pip install ax-platform sqlalchemy
 Parameters
parameters (list[dict]) – Parameters in the experiment search space. Required elements in the dictionaries are: “name” (name of this parameter, string), “type” (type of the parameter: “range”, “fixed”, or “choice”, string), “bounds” for range parameters (list of two values, lower bound first), “values” for choice parameters (list of values), and “value” for fixed parameters (single value).
objective_name (str) – Name of the metric used as objective in this experiment. This metric must be present in raw_data argument to log_data. This metric must also be present in the dict reported/returned by the Trainable.
mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute. Defaults to “max”.
parameter_constraints (list[str]) – Parameter constraints, such as “x3 >= x4” or “x3 + x4 >= 2”.
outcome_constraints (list[str]) – Outcome constraints of form “metric_name >= bound”, like “m1 <= 3.”
max_concurrent (int) – Deprecated.
use_early_stopped_trials – Deprecated.
from ax.service.ax_client import AxClient
from ray import tune
from ray.tune.suggest.ax import AxSearch

parameters = [
    {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
    {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
]

def easy_objective(config):
    for i in range(100):
        intermediate_result = config["x1"] + config["x2"] * i
        tune.track.log(score=intermediate_result)

client = AxClient(enforce_sequential_optimization=False)
client.create_experiment(parameters=parameters, objective_name="score")
algo = AxSearch(client)
tune.run(easy_objective, search_alg=algo)
Bayesian Optimization (tune.suggest.bayesopt.BayesOptSearch)¶

class ray.tune.suggest.bayesopt.BayesOptSearch(space, metric, mode='max', utility_kwargs=None, random_state=42, random_search_steps=10, verbose=0, analysis=None, max_concurrent=None, use_early_stopped_trials=None)[source]¶

Uses fmfn/BayesianOptimization to optimize hyperparameters.
fmfn/BayesianOptimization is a library for Bayesian Optimization. More info can be found here: https://github.com/fmfn/BayesianOptimization.
You will need to install fmfn/BayesianOptimization via the following:
pip install bayesian-optimization
This algorithm requires setting a search space using the BayesianOptimization search space specification.
 Parameters
space (dict) – Continuous search space. Parameters will be sampled from this space which will be used to run trials.
metric (str) – The training result objective value attribute.
mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
utility_kwargs (dict) – Parameters to define the utility function. The default value is a dictionary with three keys: kind ("ucb", Upper Confidence Bound), kappa (2.576), and xi (0.0).
random_state (int) – Used to initialize BayesOpt.
random_search_steps (int) – Number of initial random searches. This is necessary to avoid initial local overfitting of the Bayesian process.
analysis (ExperimentAnalysis) – Optionally, the previous analysis to integrate.
verbose (int) – Sets verbosity level for BayesOpt packages.
max_concurrent – Deprecated.
use_early_stopped_trials – Deprecated.
from ray import tune
from ray.tune.suggest.bayesopt import BayesOptSearch

space = {
    'width': (0, 20),
    'height': (-100, 100),
}
algo = BayesOptSearch(space, metric="mean_loss", mode="min")
tune.run(my_func, search_alg=algo)
BOHB (tune.suggest.bohb.TuneBOHB)¶
BOHB (Bayesian Optimization HyperBand) is an algorithm that both terminates bad trials and also uses Bayesian Optimization to improve the hyperparameter search. It is backed by the HpBandSter library.
Importantly, BOHB is intended to be paired with a specific scheduler class: HyperBandForBOHB.
This algorithm requires using the ConfigSpace search space specification. In order to use this search algorithm, you will need to install HpBandSter and ConfigSpace:
$ pip install hpbandster ConfigSpace
See the BOHB paper for more details.

class ray.tune.suggest.bohb.TuneBOHB(space, bohb_config=None, max_concurrent=10, metric='neg_mean_loss', mode='max')[source]¶

BOHB suggestion component.
Requires HpBandSter and ConfigSpace to be installed. You can install HpBandSter and ConfigSpace with:
pip install hpbandster ConfigSpace
This should be used in conjunction with HyperBandForBOHB.
 Parameters
space (ConfigurationSpace) – Continuous ConfigSpace search space. Parameters will be sampled from this space which will be used to run trials.
bohb_config (dict) – Configuration for the HpBandSter BOHB algorithm.
max_concurrent (int) – Number of maximum concurrent trials. Defaults to 10.
metric (str) – The training result objective value attribute.
mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
Example:
import ConfigSpace as CS

config_space = CS.ConfigurationSpace()
config_space.add_hyperparameter(
    CS.UniformFloatHyperparameter('width', lower=0, upper=20))
config_space.add_hyperparameter(
    CS.UniformFloatHyperparameter('height', lower=-100, upper=100))
config_space.add_hyperparameter(
    CS.CategoricalHyperparameter(
        name='activation', choices=['relu', 'tanh']))

algo = TuneBOHB(
    config_space, max_concurrent=4, metric='mean_loss', mode='min')
bohb = HyperBandForBOHB(
    time_attr='training_iteration',
    metric='mean_loss',
    mode='min',
    max_t=100)
run(MyTrainableClass, scheduler=bohb, search_alg=algo)
Dragonfly (tune.suggest.dragonfly.DragonflySearch)¶

class ray.tune.suggest.dragonfly.DragonflySearch(optimizer, metric='episode_reward_mean', mode='max', points_to_evaluate=None, evaluated_rewards=None, **kwargs)[source]¶

Uses Dragonfly to optimize hyperparameters.
Dragonfly provides an array of tools to scale up Bayesian optimisation to expensive large-scale problems, including high-dimensional optimisation, parallel evaluations in synchronous or asynchronous settings, multi-fidelity optimisation (using cheap approximations to speed up the optimisation process), and multi-objective optimisation. For more info:
Dragonfly Website: https://github.com/dragonfly/dragonfly
Dragonfly Documentation: https://dragonfly-opt.readthedocs.io/
To use this search algorithm, install Dragonfly:
$ pip install dragonfly-opt
This interface requires using FunctionCallers and optimizers provided by Dragonfly.
from ray import tune
from dragonfly.opt.gp_bandit import EuclideanGPBandit
from dragonfly.exd.experiment_caller import EuclideanFunctionCaller
from dragonfly import load_config

domain_vars = [{
    "name": "LiNO3_vol",
    "type": "float",
    "min": 0,
    "max": 7
}, {
    "name": "Li2SO4_vol",
    "type": "float",
    "min": 0,
    "max": 7
}, {
    "name": "NaClO4_vol",
    "type": "float",
    "min": 0,
    "max": 7
}]

domain_config = load_config({"domain": domain_vars})
func_caller = EuclideanFunctionCaller(
    None, domain_config.domain.list_of_domains[0])
optimizer = EuclideanGPBandit(func_caller, ask_tell_mode=True)

algo = DragonflySearch(optimizer, metric="objective", mode="max")
tune.run(my_func, search_alg=algo)
 Parameters
optimizer (dragonfly.opt.BlackboxOptimiser) – Optimizer provided from dragonfly. Choose an optimiser that extends BlackboxOptimiser.
metric (str) – The training result objective value attribute.
mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
points_to_evaluate (list of lists) – A list of points you’d like to run first before sampling from the optimiser, e.g. these could be parameter configurations you already know work well to help the optimiser select good values. Each point is a list of the parameters using the order definition given by parameter_names.
evaluated_rewards (list) – If you have previously evaluated the parameters passed in as points_to_evaluate you can avoid rerunning those trials by passing in the reward attributes as a list so the optimiser can be told the results without needing to recompute the trial. Must be the same length as points_to_evaluate.
HyperOpt (tune.suggest.hyperopt.HyperOptSearch)¶

class ray.tune.suggest.hyperopt.HyperOptSearch(space, metric='episode_reward_mean', mode='max', points_to_evaluate=None, n_initial_points=20, random_state_seed=None, gamma=0.25, max_concurrent=None, use_early_stopped_trials=None)[source]¶

A wrapper around HyperOpt to provide trial suggestions.
HyperOpt is a Python library for serial and parallel optimization over awkward search spaces, which may include real-valued, discrete, and conditional dimensions. More info can be found at http://hyperopt.github.io/hyperopt.
HyperOptSearch uses the Tree-structured Parzen Estimators algorithm, though it can be trivially extended to support any algorithm HyperOpt supports.
To use this search algorithm, you will need to install HyperOpt:
pip install -U hyperopt
You will not be able to leverage Tune’s default grid_search and random search primitives when using HyperOptSearch. You need to use the HyperOpt search space specification:

from hyperopt import hp

space = {
    'width': hp.uniform('width', 0, 20),
    'height': hp.uniform('height', -100, 100),
    'activation': hp.choice("activation", ["relu", "tanh"])
}

current_best_params = [{
    'width': 10,
    'height': 0,
    'activation': 0,  # The index of "relu"
}]

algo = HyperOptSearch(
    space, metric="mean_loss", mode="min",
    points_to_evaluate=current_best_params)
 Parameters
space (dict) – HyperOpt configuration. Parameters will be sampled from this configuration and will be used to override parameters generated in the variant generation process.
metric (str) – The training result objective value attribute.
mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
points_to_evaluate (list) – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want HyperOpt to run first to help the TPE algorithm make better suggestions for future parameters. Needs to be a list of dicts of hyperopt-named variables. Choice variables should be indicated by their index in the list (see example).
n_initial_points (int) – Number of random evaluations of the objective function before starting to approximate it with Tree-Parzen estimators. Defaults to 20.
random_state_seed (int, array_like, None) – Seed for reproducible results. Defaults to None.
gamma (float in range (0,1)) – Parameter governing the Tree-Parzen estimators suggestion algorithm. Defaults to 0.25.
max_concurrent – Deprecated.
use_early_stopped_trials – Deprecated.
Nevergrad (tune.suggest.nevergrad.NevergradSearch)¶

class ray.tune.suggest.nevergrad.NevergradSearch(optimizer, parameter_names, metric='episode_reward_mean', mode='max', max_concurrent=None, **kwargs)[source]¶

Uses Nevergrad to optimize hyperparameters.
Nevergrad is an open-source tool from Facebook for derivative-free optimization. More info can be found at: https://github.com/facebookresearch/nevergrad.
You will need to install Nevergrad via the following command:
$ pip install nevergrad
This algorithm requires using an optimizer provided by Nevergrad, of which there are many options. A good rundown can be found on the Nevergrad README’s Optimization section.
from nevergrad.optimization import optimizerlib

instrumentation = 1
optimizer = optimizerlib.OnePlusOne(instrumentation, budget=100)
algo = NevergradSearch(
    optimizer, ["lr"], metric="mean_loss", mode="min")
 Parameters
optimizer (nevergrad.optimization.Optimizer) – Optimizer provided from Nevergrad.
parameter_names (list) – List of parameter names. Should match the dimension of the optimizer output. Alternatively, set to None if the optimizer is already instrumented with kwargs (see nevergrad v0.2.0+).
metric (str) – The training result objective value attribute.
mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
use_early_stopped_trials – Deprecated.
max_concurrent – Deprecated.
Note
In nevergrad v0.2.0+, optimizers can be instrumented. For instance, the following specifies a search for “lr” from 1 to 2.
>>> from nevergrad.optimization import optimizerlib
>>> from nevergrad import instrumentation as inst
>>> lr = inst.var.Array(1).bounded(1, 2).asfloat()
>>> instrumentation = inst.Instrumentation(lr=lr)
>>> optimizer = optimizerlib.OnePlusOne(instrumentation, budget=100)
>>> algo = NevergradSearch(
...     optimizer, None, metric="mean_loss", mode="min")
SigOpt (tune.suggest.sigopt.SigOptSearch)¶
You will need to use the SigOpt experiment and space specification to specify your search space.

class ray.tune.suggest.sigopt.SigOptSearch(space, name='Default Tune Experiment', max_concurrent=1, reward_attr=None, metric='episode_reward_mean', mode='max', **kwargs)[source]¶

A wrapper around SigOpt to provide trial suggestions.
You must install SigOpt and have a SigOpt API key to use this module. Store the API token as an environment variable SIGOPT_KEY as follows:

pip install -U sigopt
export SIGOPT_KEY= ...
You will need to use the SigOpt experiment and space specification.
This module manages its own concurrency.
 Parameters
space (list of dict) – SigOpt configuration. Parameters will be sampled from this configuration and will be used to override parameters generated in the variant generation process.
name (str) – Name of experiment. Required by SigOpt.
max_concurrent (int) – Number of maximum concurrent trials supported based on the user’s SigOpt plan. Defaults to 1.
metric (str) – The training result objective value attribute.
mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
Example:
space = [
    {
        'name': 'width',
        'type': 'int',
        'bounds': {
            'min': 0,
            'max': 20
        },
    },
    {
        'name': 'height',
        'type': 'int',
        'bounds': {
            'min': -100,
            'max': 100
        },
    },
]
algo = SigOptSearch(
    space, name="SigOpt Example Experiment",
    max_concurrent=1, metric="mean_loss", mode="min")
ScikitOptimize (tune.suggest.skopt.SkOptSearch)¶

class ray.tune.suggest.skopt.SkOptSearch(optimizer, parameter_names, metric='episode_reward_mean', mode='max', points_to_evaluate=None, evaluated_rewards=None, max_concurrent=None, use_early_stopped_trials=None)[source]¶

Uses Scikit Optimize (skopt) to optimize hyperparameters.
Scikit-Optimize is a black-box optimization library. Read more here: https://scikit-optimize.github.io.
You will need to install ScikitOptimize to use this module.
pip install scikit-optimize
This Search Algorithm requires you to pass in a skopt Optimizer object.
 Parameters
optimizer (skopt.optimizer.Optimizer) – Optimizer provided from skopt.
parameter_names (list) – List of parameter names. Should match the dimension of the optimizer output.
metric (str) – The training result objective value attribute.
mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
points_to_evaluate (list of lists) – A list of points you’d like to run first before sampling from the optimiser, e.g. these could be parameter configurations you already know work well to help the optimiser select good values. Each point is a list of the parameters using the order definition given by parameter_names.
evaluated_rewards (list) – If you have previously evaluated the parameters passed in as points_to_evaluate you can avoid rerunning those trials by passing in the reward attributes as a list so the optimiser can be told the results without needing to recompute the trial. Must be the same length as points_to_evaluate. (See tune/examples/skopt_example.py)
max_concurrent – Deprecated.
use_early_stopped_trials – Deprecated.
Example:
from skopt import Optimizer

optimizer = Optimizer([(0, 20), (-100, 100)])
current_best_params = [[10, 0], [15, 20]]
algo = SkOptSearch(
    optimizer, ["width", "height"],
    metric="mean_loss", mode="min",
    points_to_evaluate=current_best_params)
ZOOpt (tune.suggest.zoopt.ZOOptSearch)¶

class ray.tune.suggest.zoopt.ZOOptSearch(algo='asracos', budget=None, dim_dict=None, metric='episode_reward_mean', mode='min', **kwargs)[source]¶

A wrapper around ZOOpt to provide trial suggestions.
ZOOptSearch is a library for derivative-free optimization. It is backed by the ZOOpt package. Currently, Asynchronous Sequential RAndomized COordinate Shrinking (ASRacos) is implemented in Tune.
To use ZOOptSearch, install zoopt (>=0.4.0):
pip install -U zoopt
from ray.tune import run
from ray.tune.suggest.zoopt import ZOOptSearch
from zoopt import ValueType

dim_dict = {
    "height": (ValueType.CONTINUOUS, [-10, 10], 1e-2),
    "width": (ValueType.DISCRETE, [-10, 10], False)
}

config = {
    "num_samples": 200,
    "config": {
        "iterations": 10,  # evaluation times
    },
    "stop": {
        "timesteps_total": 10  # custom stop rules
    }
}

zoopt_search = ZOOptSearch(
    algo="Asracos",  # only Asracos is currently supported
    budget=config["num_samples"],
    dim_dict=dim_dict,
    metric="mean_loss",
    mode="min")

run(my_objective,
    search_alg=zoopt_search,
    name="zoopt_search",
    **config)
 Parameters
algo (str) – The zoopt algorithm to use. Only ASRacos is currently supported.
budget (int) – Number of samples.
dim_dict (dict) – Dimension dictionary. For continuous dimensions: (continuous, search_range, precision); for discrete dimensions: (discrete, search_range, has_order). More details can be found in the zoopt package.
metric (str) – The training result objective value attribute. Defaults to “episode_reward_mean”.
mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute. Defaults to “min”.
Repeated Evaluations (tune.suggest.Repeater)¶
Use ray.tune.suggest.Repeater to average over multiple evaluations of the same hyperparameter configurations. This is useful in cases where the evaluated training procedure has high variance (e.g., in reinforcement learning).
By default, Repeater will take in a repeat parameter and a search_alg. The search_alg will suggest new configurations to try, and the Repeater will run repeat trials of each configuration. It will then average the search_alg.metric from the final results of each repeated trial.
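The averaging step can be sketched in plain Python. This is an illustration of the reduction a Repeater-style wrapper performs, not Tune API code; the helper name and metric values are made up:

```python
# Illustration only: how a Repeater-style wrapper reduces the final results
# of repeated trials to the single averaged value the search algorithm sees.
def average_repeats(final_results, metric):
    """Average `metric` over the final result dict of each repeated trial."""
    values = [result[metric] for result in final_results]
    return sum(values) / len(values)

# Three repeats of one configuration, each reporting a final "mean_loss".
repeats = [{"mean_loss": 0.30}, {"mean_loss": 0.50}, {"mean_loss": 0.40}]
averaged = average_repeats(repeats, "mean_loss")  # approximately 0.4
```

The underlying search_alg then sees only this averaged value: one observation per configuration.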
Warning
It is recommended to not use Repeater with a TrialScheduler. Early termination can negatively affect the average reported metric.

class ray.tune.suggest.Repeater(searcher, repeat=1, set_index=True)[source]¶

A wrapper algorithm for repeating trials of the same parameters.
Set tune.run(num_samples=…) to be a multiple of repeat. For example, set num_samples=15 if you intend to obtain 3 search algorithm suggestions and repeat each suggestion 5 times. Any leftover trials (num_samples mod repeat) will be ignored.
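The arithmetic above can be checked with a small plain-Python sketch (the helper name is hypothetical, not part of the Tune API):

```python
# Hypothetical helper illustrating the num_samples / repeat arithmetic:
# the searcher produces num_samples // repeat distinct suggestions, and
# num_samples % repeat leftover trials are ignored.
def repeat_budget(num_samples, repeat):
    return num_samples // repeat, num_samples % repeat

suggestions, leftover = repeat_budget(15, 5)      # 3 suggestions, 0 ignored
suggestions17, leftover17 = repeat_budget(17, 5)  # 3 suggestions, 2 ignored
```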
It is recommended that you do not run an early-stopping TrialScheduler simultaneously.
 Parameters
searcher (Searcher) – Searcher object that the Repeater will optimize. Note that the Searcher will only see 1 trial among multiple repeated trials. The result/metric passed to the Searcher upon trial completion will be averaged among all repeats.
repeat (int) – Number of times to generate a trial with a repeated configuration. Defaults to 1.
set_index (bool) – Sets a tune.suggest.repeater.TRIAL_INDEX in Trainable/Function config which corresponds to the index of the repeated trial. This can be used for seeds. Defaults to True.
Example:
from ray.tune.suggest import Repeater

search_alg = BayesOptSearch(...)
re_search_alg = Repeater(search_alg, repeat=10)

# Repeat 2 samples 10 times each.
tune.run(trainable, num_samples=20, search_alg=re_search_alg)
ConcurrencyLimiter (tune.suggest.ConcurrencyLimiter)¶
Use ray.tune.suggest.ConcurrencyLimiter to limit the amount of concurrency when using a search algorithm. This is useful when a given optimization algorithm does not parallelize very well (like a naive Bayesian Optimization).

class ray.tune.suggest.ConcurrencyLimiter(searcher, max_concurrent)[source]¶

A wrapper algorithm for limiting the number of concurrent trials.
 Parameters
searcher (Searcher) – Searcher object that the ConcurrencyLimiter will manage.
max_concurrent (int) – Maximum concurrent samples from the underlying searcher.
Example:
from ray.tune.suggest import ConcurrencyLimiter

search_alg = HyperOptSearch(metric="accuracy")
search_alg = ConcurrencyLimiter(search_alg, max_concurrent=2)
tune.run(trainable, search_alg=search_alg)
Implementing your own Search Algorithm¶
If you are interested in implementing or contributing a new Search Algorithm, provide the following interface:

class ray.tune.suggest.Searcher(metric='episode_reward_mean', mode='max', max_concurrent=None, use_early_stopped_trials=None)[source]¶

Bases: object

Abstract class for wrapping suggesting algorithms.
Custom algorithms can extend this class easily by overriding the suggest method to provide generated parameters for the trials.
Any subclass that implements __init__ must also call the constructor of this class:

super(Subclass, self).__init__(...)

To track suggestions and their corresponding evaluations, the method suggest will be passed a trial_id, which will be used in subsequent notifications.
 Parameters
metric (str) – The training result objective value attribute.
mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.
class ExampleSearch(Searcher):
    def __init__(self, metric="mean_loss", mode="min", **kwargs):
        super(ExampleSearch, self).__init__(
            metric=metric, mode=mode, **kwargs)
        self.optimizer = Optimizer()
        self.configurations = {}

    def suggest(self, trial_id):
        configuration = self.optimizer.query()
        self.configurations[trial_id] = configuration

    def on_trial_complete(self, trial_id, result, **kwargs):
        configuration = self.configurations[trial_id]
        if result and self.metric in result:
            self.optimizer.update(configuration, result[self.metric])

tune.run(trainable_function, search_alg=ExampleSearch())

on_trial_result(trial_id, result)[source]¶

Optional notification for result during training.
Note that by default, the result dict may include NaNs or may not include the optimization metric. It is up to the subclass implementation to preprocess the result to avoid breaking the optimization process.
 Parameters
trial_id (str) – A unique string ID for the trial.
result (dict) – Dictionary of metrics for current training progress. Note that the result dict may include NaNs or may not include the optimization metric. It is up to the subclass implementation to preprocess the result to avoid breaking the optimization process.
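A subclass will typically guard against both cases (NaN values and a missing metric) before forwarding the result to its optimizer. A minimal plain-Python sketch of such preprocessing (the helper name is hypothetical, not part of the Tune API):

```python
import math

def usable_metric(result, metric):
    """Return the metric value if present and finite, else None."""
    if not result or metric not in result:
        return None
    value = result[metric]
    if isinstance(value, float) and math.isnan(value):
        return None
    return value

ok = usable_metric({"mean_loss": 0.25}, "mean_loss")           # 0.25
missing = usable_metric({}, "mean_loss")                       # None
bad = usable_metric({"mean_loss": float("nan")}, "mean_loss")  # None
```

Only when a usable value comes back would the subclass update its underlying optimizer; otherwise it can simply skip the notification.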

on_trial_complete(trial_id, result=None, error=False)[source]¶

Notification for the completion of a trial.
Typically, this method is used for notifying the underlying optimizer of the result.
 Parameters
trial_id (str) – A unique string ID for the trial.
result (dict) – Dictionary of metrics for current training progress. Note that the result dict may include NaNs or may not include the optimization metric. It is up to the subclass implementation to preprocess the result to avoid breaking the optimization process. Upon errors, this may also be None.
error (bool) – True if the training process raised an error.

suggest(trial_id)[source]¶

Queries the algorithm to retrieve the next set of parameters.
 Parameters
trial_id (str) – Trial ID used for subsequent notifications.
 Returns
Configuration for a trial, if possible.
 Return type
dict | None

property metric¶

The training result objective value attribute.

property mode¶

Specifies if minimizing or maximizing the metric.