Search Algorithms (tune.suggest)

Repeater

class ray.tune.suggest.Repeater(searcher, repeat=1, set_index=True)[source]

A wrapper algorithm for repeating trials with the same parameters.

Set tune.run(num_samples=…) to be a multiple of repeat. For example, set num_samples=15 if you intend to obtain 3 search algorithm suggestions and repeat each suggestion 5 times. Any leftover trials (num_samples mod repeat) will be ignored.

It is recommended that you do not run an early-stopping TrialScheduler simultaneously.

Parameters
  • searcher (Searcher) – Searcher object that the Repeater will optimize. Note that the Searcher will only see 1 trial among multiple repeated trials. The result/metric passed to the Searcher upon trial completion will be averaged among all repeats.

  • repeat (int) – Number of times to generate a trial with a repeated configuration. Defaults to 1.

  • set_index (bool) – Sets a tune.suggest.repeater.TRIAL_INDEX in the Trainable/Function config corresponding to the index of the repeated trial. This can be used, for example, to set random seeds. Defaults to True.
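Concretely, the arithmetic and averaging described above can be sketched in plain Python. This is not the actual Repeater implementation; the helper names usable_samples and averaged_metric are illustrative, not part of Tune:

```python
# A sketch of Repeater's two pieces of bookkeeping: num_samples should be
# a multiple of `repeat`, and the metric passed to the wrapped Searcher is
# the mean over all repeats of one suggestion.
def usable_samples(num_samples, repeat):
    # Leftover trials (num_samples mod repeat) are ignored.
    return num_samples - (num_samples % repeat)

def averaged_metric(repeat_results):
    # The wrapped Searcher sees one averaged result per suggestion.
    return sum(repeat_results) / len(repeat_results)

print(usable_samples(17, 5))             # -> 15: 3 suggestions x 5 repeats
print(averaged_metric([2.0, 4.0, 6.0]))  # -> 4.0
```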

ConcurrencyLimiter

class ray.tune.suggest.ConcurrencyLimiter(searcher, max_concurrent)[source]

A wrapper algorithm for limiting the number of concurrent trials.

Parameters

searcher (Searcher) – Searcher object that the ConcurrencyLimiter will manage.

Example:

from ray import tune
from ray.tune.suggest import ConcurrencyLimiter
from ray.tune.suggest.hyperopt import HyperOptSearch

search_alg = HyperOptSearch(metric="accuracy")
search_alg = ConcurrencyLimiter(search_alg, max_concurrent=2)
tune.run(trainable, search_alg=search_alg)

AxSearch

class ray.tune.suggest.ax.AxSearch(ax_client, mode='max', use_early_stopped_trials=None, max_concurrent=None)[source]

A wrapper around Ax to provide trial suggestions.

Requires Ax to be installed. Ax is an open source tool from Facebook for configuring and optimizing experiments. More information can be found at https://ax.dev/.

Parameters
  • parameters (list[dict]) – Parameters in the experiment search space. Required elements in the dictionaries are: “name” (name of this parameter, string), “type” (type of the parameter: “range”, “fixed”, or “choice”, string), “bounds” for range parameters (list of two values, lower bound first), “values” for choice parameters (list of values), and “value” for fixed parameters (single value).

  • objective_name (str) – Name of the metric used as objective in this experiment. This metric must be present in raw_data argument to log_data. This metric must also be present in the dict reported/returned by the Trainable.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute. Defaults to “max”.

  • parameter_constraints (list[str]) – Parameter constraints, such as “x3 >= x4” or “x3 + x4 >= 2”.

  • outcome_constraints (list[str]) – Outcome constraints of form “metric_name >= bound”, like “m1 <= 3.”

  • max_concurrent (int) – Deprecated.

  • use_early_stopped_trials – Deprecated.

from ax.service.ax_client import AxClient
from ray import tune
from ray.tune.suggest.ax import AxSearch

parameters = [
    {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
    {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
]

def easy_objective(config):
    for i in range(100):
        intermediate_result = config["x1"] + config["x2"] * i
        tune.track.log(score=intermediate_result)

client = AxClient(enforce_sequential_optimization=False)
client.create_experiment(parameters=parameters, objective_name="score")
algo = AxSearch(client)
tune.run(easy_objective, search_alg=algo)

BayesOptSearch

class ray.tune.suggest.bayesopt.BayesOptSearch(space, metric='episode_reward_mean', mode='max', utility_kwargs=None, random_state=1, verbose=0, max_concurrent=None, use_early_stopped_trials=None)[source]

A wrapper around BayesOpt to provide trial suggestions.

Requires BayesOpt to be installed. You can install BayesOpt with the command: pip install bayesian-optimization.

Parameters
  • space (dict) – Continuous search space. Parameters will be sampled from this space which will be used to run trials.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

  • utility_kwargs (dict) – Parameters to define the utility function. Must provide values for the keys kind, kappa, and xi.

  • random_state (int) – Used to initialize BayesOpt.

  • verbose (int) – Sets verbosity level for BayesOpt packages.

  • max_concurrent – Deprecated.

  • use_early_stopped_trials – Deprecated.

from ray import tune
from ray.tune.suggest.bayesopt import BayesOptSearch

space = {
    'width': (0, 20),
    'height': (-100, 100),
}
algo = BayesOptSearch(space, metric="mean_loss", mode="min")
tune.run(my_func, search_alg=algo)

TuneBOHB

class ray.tune.suggest.bohb.TuneBOHB(space, bohb_config=None, max_concurrent=10, metric='neg_mean_loss', mode='max')[source]

BOHB suggestion component.

Requires HpBandSter and ConfigSpace to be installed. You can install HpBandSter and ConfigSpace with: pip install hpbandster ConfigSpace.

This should be used in conjunction with HyperBandForBOHB.

Parameters
  • space (ConfigurationSpace) – Continuous ConfigSpace search space. Parameters will be sampled from this space which will be used to run trials.

  • bohb_config (dict) – configuration for HpBandSter BOHB algorithm

  • max_concurrent (int) – Number of maximum concurrent trials. Defaults to 10.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

Example:

import ConfigSpace as CS
from ray.tune import run
from ray.tune.schedulers import HyperBandForBOHB
from ray.tune.suggest.bohb import TuneBOHB

config_space = CS.ConfigurationSpace()
config_space.add_hyperparameter(
    CS.UniformFloatHyperparameter('width', lower=0, upper=20))
config_space.add_hyperparameter(
    CS.UniformFloatHyperparameter('height', lower=-100, upper=100))
config_space.add_hyperparameter(
    CS.CategoricalHyperparameter(
        name='activation', choices=['relu', 'tanh']))

algo = TuneBOHB(
    config_space, max_concurrent=4, metric='mean_loss', mode='min')
bohb = HyperBandForBOHB(
    time_attr='training_iteration',
    metric='mean_loss',
    mode='min',
    max_t=100)
run(MyTrainableClass, scheduler=bohb, search_alg=algo)

DragonflySearch

class ray.tune.suggest.dragonfly.DragonflySearch(optimizer, metric='episode_reward_mean', mode='max', points_to_evaluate=None, evaluated_rewards=None, **kwargs)[source]

A wrapper around Dragonfly to provide trial suggestions.

Requires Dragonfly to be installed via pip install dragonfly-opt.

Parameters
  • optimizer (dragonfly.opt.BlackboxOptimiser) – Optimizer provided from dragonfly. Choose an optimiser that extends BlackboxOptimiser.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

  • points_to_evaluate (list of lists) – A list of points you’d like to run first before sampling from the optimiser, e.g. these could be parameter configurations you already know work well to help the optimiser select good values. Each point is a list of the parameters using the order definition given by parameter_names.

  • evaluated_rewards (list) – If you have previously evaluated the parameters passed in as points_to_evaluate you can avoid re-running those trials by passing in the reward attributes as a list so the optimiser can be told the results without needing to re-compute the trial. Must be the same length as points_to_evaluate.

from ray import tune
from ray.tune.suggest.dragonfly import DragonflySearch
from dragonfly.opt.gp_bandit import EuclideanGPBandit
from dragonfly.exd.experiment_caller import EuclideanFunctionCaller
from dragonfly import load_config

domain_vars = [{
    "name": "LiNO3_vol",
    "type": "float",
    "min": 0,
    "max": 7
}, {
    "name": "Li2SO4_vol",
    "type": "float",
    "min": 0,
    "max": 7
}, {
    "name": "NaClO4_vol",
    "type": "float",
    "min": 0,
    "max": 7
}]

domain_config = load_config({"domain": domain_vars})
func_caller = EuclideanFunctionCaller(None,
    domain_config.domain.list_of_domains[0])
optimizer = EuclideanGPBandit(func_caller, ask_tell_mode=True)

algo = DragonflySearch(optimizer, metric="objective", mode="max")

tune.run(my_func, search_alg=algo)

HyperOptSearch

class ray.tune.suggest.hyperopt.HyperOptSearch(space, metric='episode_reward_mean', mode='max', points_to_evaluate=None, n_initial_points=20, random_state_seed=None, gamma=0.25, max_concurrent=None, use_early_stopped_trials=None)[source]

A wrapper around HyperOpt to provide trial suggestions.

Requires HyperOpt to be installed from source. Uses the Tree-structured Parzen Estimators algorithm, though it can be trivially extended to support any algorithm HyperOpt implements. Externally added trials will not be tracked by HyperOpt. Trials of the current run can be saved using the save method, and trials of a previous run can be loaded using the restore method, enabling a warm start.

Parameters
  • space (dict) – HyperOpt configuration. Parameters will be sampled from this configuration and will be used to override parameters generated in the variant generation process.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

  • points_to_evaluate (list) – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want HyperOpt to run first to help the TPE algorithm make better suggestions for future parameters. Needs to be a list of dicts of hyperopt-named variables. Choice variables should be indicated by their index in the list (see example).

  • n_initial_points (int) – Number of random evaluations of the objective function before starting to approximate it with Tree Parzen Estimators. Defaults to 20.

  • random_state_seed (int, array_like, None) – Seed for reproducible results. Defaults to None.

  • gamma (float in range (0,1)) – Parameter governing the Tree Parzen Estimators suggestion algorithm. Defaults to 0.25.

  • max_concurrent – Deprecated.

  • use_early_stopped_trials – Deprecated.

from hyperopt import hp
from ray.tune.suggest.hyperopt import HyperOptSearch

space = {
    'width': hp.uniform('width', 0, 20),
    'height': hp.uniform('height', -100, 100),
    'activation': hp.choice("activation", ["relu", "tanh"])
}
current_best_params = [{
    'width': 10,
    'height': 0,
    'activation': 0, # The index of "relu"
}]
algo = HyperOptSearch(
    space, metric="mean_loss", mode="min",
    points_to_evaluate=current_best_params)

NevergradSearch

class ray.tune.suggest.nevergrad.NevergradSearch(optimizer, parameter_names, metric='episode_reward_mean', mode='max', max_concurrent=None, **kwargs)[source]

A wrapper around Nevergrad to provide trial suggestions.

Requires Nevergrad to be installed.

Nevergrad is an open source tool from Facebook for derivative free optimization of parameters and/or hyperparameters. It features a wide range of optimizers in a standard ask and tell interface. More information can be found at https://github.com/facebookresearch/nevergrad.

Parameters
  • optimizer (nevergrad.optimization.Optimizer) – Optimizer provided from Nevergrad.

  • parameter_names (list) – List of parameter names. Should match the dimension of the optimizer output. Alternatively, set to None if the optimizer is already instrumented with kwargs (see nevergrad v0.2.0+).

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

  • use_early_stopped_trials – Deprecated.

  • max_concurrent – Deprecated.

from nevergrad.optimization import optimizerlib
from ray.tune.suggest.nevergrad import NevergradSearch

instrumentation = 1
optimizer = optimizerlib.OnePlusOne(instrumentation, budget=100)
algo = NevergradSearch(
    optimizer, ["lr"], metric="mean_loss", mode="min")

Note

In nevergrad v0.2.0+, optimizers can be instrumented. For instance, the following specifies a search for “lr” between 1 and 2.

>>> from nevergrad.optimization import optimizerlib
>>> from nevergrad import instrumentation as inst
>>> lr = inst.var.Array(1).bounded(1, 2).asfloat()
>>> instrumentation = inst.Instrumentation(lr=lr)
>>> optimizer = optimizerlib.OnePlusOne(instrumentation, budget=100)
>>> algo = NevergradSearch(
...     optimizer, None, metric="mean_loss", mode="min")

SigOptSearch

class ray.tune.suggest.sigopt.SigOptSearch(space, name='Default Tune Experiment', max_concurrent=1, reward_attr=None, metric='episode_reward_mean', mode='max', **kwargs)[source]

A wrapper around SigOpt to provide trial suggestions.

Requires SigOpt to be installed. Requires the user to store their SigOpt API key locally in the SIGOPT_KEY environment variable.

This module manages its own concurrency.

Parameters
  • space (list of dict) – SigOpt configuration. Parameters will be sampled from this configuration and will be used to override parameters generated in the variant generation process.

  • name (str) – Name of experiment. Required by SigOpt.

  • max_concurrent (int) – Number of maximum concurrent trials supported based on the user’s SigOpt plan. Defaults to 1.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

Example:

from ray.tune.suggest.sigopt import SigOptSearch

space = [
    {
        'name': 'width',
        'type': 'int',
        'bounds': {
            'min': 0,
            'max': 20
        },
    },
    {
        'name': 'height',
        'type': 'int',
        'bounds': {
            'min': -100,
            'max': 100
        },
    },
]
algo = SigOptSearch(
    space, name="SigOpt Example Experiment",
    max_concurrent=1, metric="mean_loss", mode="min")

SkOptSearch

class ray.tune.suggest.skopt.SkOptSearch(optimizer, parameter_names, metric='episode_reward_mean', mode='max', points_to_evaluate=None, evaluated_rewards=None, max_concurrent=None, use_early_stopped_trials=None)[source]

A wrapper around skopt to provide trial suggestions.

Requires skopt to be installed.

Parameters
  • optimizer (skopt.optimizer.Optimizer) – Optimizer provided from skopt.

  • parameter_names (list) – List of parameter names. Should match the dimension of the optimizer output.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

  • points_to_evaluate (list of lists) – A list of points you’d like to run first before sampling from the optimiser, e.g. these could be parameter configurations you already know work well to help the optimiser select good values. Each point is a list of the parameters using the order definition given by parameter_names.

  • evaluated_rewards (list) – If you have previously evaluated the parameters passed in as points_to_evaluate you can avoid re-running those trials by passing in the reward attributes as a list so the optimiser can be told the results without needing to re-compute the trial. Must be the same length as points_to_evaluate. (See tune/examples/skopt_example.py)

  • max_concurrent – Deprecated.

  • use_early_stopped_trials – Deprecated.

Example

>>> from skopt import Optimizer
>>> from ray.tune.suggest.skopt import SkOptSearch
>>> optimizer = Optimizer([(0, 20), (-100, 100)])
>>> current_best_params = [[10, 0], [15, -20]]
>>> algo = SkOptSearch(optimizer,
...     ["width", "height"],
...     metric="mean_loss",
...     mode="min",
...     points_to_evaluate=current_best_params)

ZOOptSearch

class ray.tune.suggest.zoopt.ZOOptSearch(algo='asracos', budget=None, dim_dict=None, metric='episode_reward_mean', mode='min', **kwargs)[source]

A wrapper around ZOOpt to provide trial suggestions.

Requires zoopt package (>=0.4.0) to be installed. You can install it with the command: pip install -U zoopt.

Parameters
  • algo (str) – The optimization algorithm from ZOOpt to use. Currently only ASRacos is supported.

  • budget (int) – Number of samples.

  • dim_dict (dict) – Dimension dictionary. For continuous dimensions: (continuous, search_range, precision); For discrete dimensions: (discrete, search_range, has_order). More details can be found in zoopt package.

  • metric (str) – The training result objective value attribute. Defaults to “episode_reward_mean”.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute. Defaults to “min”.

from ray.tune import run
from ray.tune.suggest.zoopt import ZOOptSearch
from zoopt import ValueType

dim_dict = {
    "height": (ValueType.CONTINUOUS, [-10, 10], 1e-2),
    "width": (ValueType.DISCRETE, [-10, 10], False)
}

config = {
    "num_samples": 200,
    "config": {
        "iterations": 10,  # evaluation times
    },
    "stop": {
        "timesteps_total": 10  # custom stop rules
    }
}

zoopt_search = ZOOptSearch(
    algo="Asracos",  # currently, only ASRacos is supported
    budget=config["num_samples"],
    dim_dict=dim_dict,
    metric="mean_loss",
    mode="min")

run(my_objective,
    search_alg=zoopt_search,
    name="zoopt_search",
    **config)

SearchAlgorithm

class ray.tune.suggest.SearchAlgorithm[source]

Interface of an event handler API for hyperparameter search.

Unlike TrialSchedulers, SearchAlgorithms will not have the ability to modify the execution (i.e., stop and pause trials).

Trials added manually (i.e., via the Client API) will also notify this class upon new events, so custom search algorithms should maintain a list of trial IDs generated by this class.

See also: ray.tune.suggest.BasicVariantGenerator.

add_configurations(experiments)[source]

Tracks given experiment specifications.

Parameters

experiments (Experiment | list | dict) – Experiments to run.

next_trials()[source]

Provides Trial objects to be queued into the TrialRunner.

Returns

Returns a list of trials.

Return type

trials (list)

on_trial_result(trial_id, result)[source]

Called on each intermediate result returned by a trial.

This will only be called when the trial is in the RUNNING state.

Parameters

trial_id – Identifier for the trial.

on_trial_complete(trial_id, result=None, error=False)[source]

Notification for the completion of a trial.

Parameters
  • trial_id – Identifier for the trial.

  • result (dict) – Defaults to None. A dict will be provided with this notification when the trial is in the RUNNING state AND either completes naturally or by manual termination.

  • error (bool) – Defaults to False. True if the trial is in the RUNNING state and errors.

is_finished()[source]

Returns True if no trials are left to be queued into the TrialRunner.

Can return True before all trials have finished executing.

set_finished()[source]

Marks the search algorithm as finished.
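The event-handler lifecycle above can be sketched without Ray. RandomListSearch below is an illustrative stand-in for a custom SearchAlgorithm, not the real interface: the actual implementation exchanges Trial objects with the TrialRunner rather than plain dicts:

```python
# Illustrative stand-in for the SearchAlgorithm event-handler protocol;
# the real interface is ray.tune.suggest.SearchAlgorithm.
class RandomListSearch:
    """Queues a fixed list of configurations, then reports finished."""

    def __init__(self, configs):
        self._queue = list(configs)
        self._finished = False

    def next_trials(self):
        # Hand the runner everything currently queued.
        trials, self._queue = self._queue, []
        return trials

    def on_trial_result(self, trial_id, result):
        pass  # intermediate results; only called while RUNNING

    def on_trial_complete(self, trial_id, result=None, error=False):
        pass  # final result (or error=True) for one trial

    def is_finished(self):
        # May be True before queued trials finish executing.
        return self._finished and not self._queue

    def set_finished(self):
        self._finished = True

search = RandomListSearch([{"lr": 0.1}, {"lr": 0.01}])
batch = search.next_trials()  # the runner would queue these as Trials
search.set_finished()
assert search.is_finished()
```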

Searcher

class ray.tune.suggest.Searcher(metric='episode_reward_mean', mode='max', max_concurrent=None, use_early_stopped_trials=None)[source]

Bases: object

Abstract class for wrapping suggesting algorithms.

Custom algorithms can extend this class easily by overriding the suggest method to provide generated parameters for the trials.

Any subclass that implements __init__ must also call the constructor of this class: super(Subclass, self).__init__(...).

To track suggestions and their corresponding evaluations, the method suggest will be passed a trial_id, which will be used in subsequent notifications.

Parameters
  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

class ExampleSearch(Searcher):
    def __init__(self, metric="mean_loss", mode="min", **kwargs):
        super(ExampleSearch, self).__init__(
            metric=metric, mode=mode, **kwargs)
        self.optimizer = Optimizer()
        self.configurations = {}

    def suggest(self, trial_id):
        configuration = self.optimizer.query()
        self.configurations[trial_id] = configuration
        return configuration

    def on_trial_complete(self, trial_id, result, **kwargs):
        configuration = self.configurations[trial_id]
        if result and self.metric in result:
            self.optimizer.update(configuration, result[self.metric])

tune.run(trainable_function, search_alg=ExampleSearch())

on_trial_result(trial_id, result)[source]

Optional notification for result during training.

Note that by default, the result dict may include NaNs or may not include the optimization metric. It is up to the subclass implementation to preprocess the result to avoid breaking the optimization process.

Parameters
  • trial_id (str) – A unique string ID for the trial.

  • result (dict) – Dictionary of metrics for current training progress. Note that the result dict may include NaNs or may not include the optimization metric. It is up to the subclass implementation to preprocess the result to avoid breaking the optimization process.
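The preprocessing this note asks for might look like the following; clean_result is a hypothetical helper, not part of the Tune API:

```python
import math

def clean_result(result, metric):
    """Return the metric value, or None if it is missing or NaN,
    so the underlying optimizer is never fed an unusable observation."""
    value = result.get(metric)
    if value is None or (isinstance(value, float) and math.isnan(value)):
        return None
    return value

assert clean_result({"mean_loss": 0.5}, "mean_loss") == 0.5
assert clean_result({"mean_loss": float("nan")}, "mean_loss") is None
assert clean_result({}, "mean_loss") is None
```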

on_trial_complete(trial_id, result=None, error=False)[source]

Notification for the completion of a trial.

Typically, this method is used for notifying the underlying optimizer of the result.

Parameters
  • trial_id (str) – A unique string ID for the trial.

  • result (dict) – Dictionary of metrics for current training progress. Note that the result dict may include NaNs or may not include the optimization metric. It is up to the subclass implementation to preprocess the result to avoid breaking the optimization process. Upon errors, this may also be None.

  • error (bool) – True if the training process raised an error.

suggest(trial_id)[source]

Queries the algorithm to retrieve the next set of parameters.

Parameters

trial_id (str) – Trial ID used for subsequent notifications.

Returns

Configuration for a trial, if possible.

Return type

dict|None

save(checkpoint_dir)[source]

Save function for this object.

restore(checkpoint_dir)[source]

Restore function for this object.
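One way a Searcher subclass might implement this pair is to pickle its internal state into checkpoint_dir; the state layout and file name below are assumptions for illustration, not the behavior of any built-in searcher:

```python
import os
import pickle

class CheckpointableSearch:
    """Toy searcher state demonstrating the save/restore pattern."""

    def __init__(self):
        self.observations = []  # (config, metric) pairs seen so far

    def save(self, checkpoint_dir):
        # Persist everything needed to resume suggesting later.
        path = os.path.join(checkpoint_dir, "searcher-state.pkl")
        with open(path, "wb") as f:
            pickle.dump(self.observations, f)

    def restore(self, checkpoint_dir):
        # Load state saved by a previous run.
        path = os.path.join(checkpoint_dir, "searcher-state.pkl")
        with open(path, "rb") as f:
            self.observations = pickle.load(f)
```

Calling save after a run and restore on a fresh instance is what enables the warm-start pattern mentioned for HyperOptSearch above.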

property metric

The training result objective value attribute.

property mode

Specifies if minimizing or maximizing the metric.