ray.tune.search.basic_variant.BasicVariantGenerator#

class ray.tune.search.basic_variant.BasicVariantGenerator(points_to_evaluate: List[Dict] | None = None, max_concurrent: int = 0, constant_grid_search: bool = False, random_state: int | numpy.random.Generator | numpy.random.RandomState | None = None)[source]#

Bases: SearchAlgorithm

Uses Tune’s variant generation for resolving variables.

This is the default search algorithm used if no other search algorithm is specified.

Parameters:
  • points_to_evaluate – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want to run first to help the algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the configurations.

  • max_concurrent – Maximum number of concurrently running trials. If 0 (default), no maximum is enforced.

  • constant_grid_search – If this is set to True, Ray Tune will first try to sample random values and keep them constant over grid search parameters. If this is set to False (default), Ray Tune will sample new random parameters in each grid search condition.

  • random_state – Seed or numpy random generator to use for reproducible results. If None (default), will use the global numpy random generator (np.random). Please note that full reproducibility cannot be guaranteed in a distributed environment.
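For instance, these constructor arguments can be combined when building the generator explicitly and passing it to tune.TuneConfig (a minimal sketch; the search_alg variable name and the concrete values chosen for max_concurrent, constant_grid_search, and random_state are illustrative, not part of the original examples):

from ray import tune
from ray.tune.search.basic_variant import BasicVariantGenerator

# Illustrative values: cap parallelism at 2 trials, reuse the same random
# values across grid search variants, and seed the sampler for repeatability.
search_alg = BasicVariantGenerator(
    max_concurrent=2,
    constant_grid_search=True,
    random_state=42,
)

tuner = tune.Tuner(
    lambda config: config["a"] + config["b"],
    tune_config=tune.TuneConfig(
        search_alg=search_alg,
        num_samples=4,
    ),
    param_space={
        "a": tune.grid_search([1, 2]),
        "b": tune.randint(0, 3),
    },
)
tuner.fit()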

Example:

from ray import tune

# This will automatically use the `BasicVariantGenerator`
tuner = tune.Tuner(
    lambda config: config["a"] + config["b"],
    tune_config=tune.TuneConfig(
        num_samples=4
    ),
    param_space={
        "a": tune.grid_search([1, 2]),
        "b": tune.randint(0, 3)
    },
)
tuner.fit()

In the example above, 8 trials will be generated: for each of the 4 samples (num_samples=4), each of the two grid search values of a will be run once, and b will be sampled randomly for each trial.

The generator accepts a pre-set list of points that should be evaluated. The points will replace the first samples of each experiment passed to the BasicVariantGenerator.

Each point will replace one sample of the specified num_samples. If grid search variables are overwritten with the values specified in the presets, the number of samples will thus be reduced.

Example:

from ray import tune
from ray.tune.search.basic_variant import BasicVariantGenerator

tuner = tune.Tuner(
    lambda config: config["a"] + config["b"],
    tune_config=tune.TuneConfig(
        search_alg=BasicVariantGenerator(points_to_evaluate=[
            {"a": 2, "b": 2},
            {"a": 1},
            {"b": 2}
        ]),
        num_samples=4
    ),
    param_space={
        "a": tune.grid_search([1, 2]),
        "b": tune.randint(0, 3)
    },
)
tuner.fit()

The example above will produce six trials via four samples (a short verification sketch follows this list):

  • The first sample will produce one trial with a=2 and b=2.

  • The second sample will produce one trial with a=1 and b sampled randomly.

  • The third sample will produce two trials, one for each grid search value of a. b will be 2 for both of these trials.

  • The fourth sample will produce two trials, one for each grid search value of a. b will be sampled randomly and independently for both of these trials.
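If the return value of tuner.fit() in the example above is captured instead of discarded, the trial count can be checked against this breakdown (a minimal sketch; the results variable name is introduced here, and it assumes the returned ResultGrid reports one entry per generated trial):

# Capture the result grid instead of discarding it.
results = tuner.fit()

# One entry per generated trial; six in this example, per the breakdown above.
print(len(results))
for result in results:
    print(result.config)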

Methods

  • add_configurations – Chains generator given experiment specifications.

  • has_checkpoint – Whether a checkpoint file exists within dirpath.

  • is_finished – Returns True if no trials are left to be queued into the TrialRunner.

  • next_trial – Provides one Trial object to be queued into the TrialRunner.

  • on_trial_result – Called on each intermediate result returned by a trial.

  • restore_from_dir – Restores self + searcher + search wrappers from dirpath.

  • set_finished – Marks the search algorithm as finished.

  • set_search_properties – Passes search properties to the search algorithm.

Attributes

  • CKPT_FILE_TMPL

  • metric

  • total_samples – Gets the number of total trials to be generated (see the sketch below).
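The total_samples attribute can be used to read back how many trials the generator will create, for example with the preset points from the second example above (a minimal sketch; the search_alg variable name is introduced here, and it assumes the attribute is populated once the experiment has been registered with the generator during fitting):

from ray import tune
from ray.tune.search.basic_variant import BasicVariantGenerator

# Bind the generator to a variable so it can be inspected after tuning.
search_alg = BasicVariantGenerator(points_to_evaluate=[
    {"a": 2, "b": 2},
    {"a": 1},
    {"b": 2},
])

tuner = tune.Tuner(
    lambda config: config["a"] + config["b"],
    tune_config=tune.TuneConfig(search_alg=search_alg, num_samples=4),
    param_space={
        "a": tune.grid_search([1, 2]),
        "b": tune.randint(0, 3),
    },
)
tuner.fit()

# Once the experiment has been registered with the generator, total_samples
# reports the number of trials resolved from num_samples, the grid search
# values, and the preset points (six in the example above).
print(search_alg.total_samples)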