Stopping mechanisms (tune.stopper)¶
In addition to Trial Schedulers like ASHA, which stop underperforming trials, Ray Tune also supports custom stopping mechanisms to stop trials early. For instance, a stopping mechanism can stop trials when they reach a plateau and the metric stops changing.
Ray Tune comes with several stopping mechanisms out of the box. For custom stopping behavior, you can inherit from the Stopper class.
Other stopping behaviors are described in the user guide.
Base class for implementing a Tune experiment stopper.
Allows users to implement experiment-level stopping via
stop_all. By default, this class does not stop any trials. Subclasses need to implement __call__ and stop_all.
import time

from ray import tune
from ray.tune import Stopper

class TimeStopper(Stopper):
    def __init__(self):
        self._start = time.time()
        self._deadline = 300  # stop the whole experiment after 300 seconds

    def __call__(self, trial_id, result):
        return False  # never stop an individual trial based on its result

    def stop_all(self):
        return time.time() - self._start > self._deadline

tune.run(Trainable, num_samples=200, stop=TimeStopper())
Returns True if the trial should be terminated given the result.
Returns True if the experiment should be terminated.
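As an illustration of the per-trial half of this interface, here is a stdlib-only sketch of a stopper whose __call__ ends a single trial once a monitored metric crosses a threshold. In real use it would subclass ray.tune.Stopper; the class name, metric name, and threshold here are illustrative, not part of Ray's API.

```python
class MetricThresholdStopper:
    """Illustrative sketch: stop a trial once `metric` reaches `threshold`."""

    def __init__(self, metric, threshold):
        self._metric = metric
        self._threshold = threshold

    def __call__(self, trial_id, result):
        # Stop only this trial when the monitored metric reaches the threshold.
        return result[self._metric] >= self._threshold

    def stop_all(self):
        # Never stop the whole experiment.
        return False
```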
MaximumIterationStopper(max_iter)¶
Stop trials after reaching a maximum number of iterations.
max_iter (int) – Number of iterations before stopping a trial.
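The per-trial counting this stopper performs can be sketched with the standard library alone. This is an illustrative approximation of the documented behavior, not Ray's source; the class name is hypothetical.

```python
from collections import defaultdict

class MaxIterCheck:
    """Illustrative sketch: stop each trial after max_iter reported results."""

    def __init__(self, max_iter):
        self._max_iter = max_iter
        self._iter = defaultdict(int)  # number of results seen per trial id

    def __call__(self, trial_id, result):
        self._iter[trial_id] += 1
        return self._iter[trial_id] >= self._max_iter

    def stop_all(self):
        return False
```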
ExperimentPlateauStopper(metric, std=0.001, top=10, mode='min', patience=0)¶
Early stop the experiment when a metric plateaued across trials.
Stops the entire experiment when the metric has plateaued for more than the given amount of iterations specified in the patience parameter.
metric (str) – The metric to be monitored.
std (float) – The experiment is considered plateaued once the standard deviation of the top results falls below this value.
top (int) – The number of best models to consider.
mode (str) – The mode to select the top results. Can either be “min” or “max”.
patience (int) – Number of epochs to wait for a change in the top models.
ValueError – If the mode parameter is not “min” nor “max”.
ValueError – If the top parameter is not an integer greater than 1.
ValueError – If the standard deviation parameter is not a strictly positive float.
ValueError – If the patience parameter is not a strictly positive integer.
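The decision rule described above can be sketched in plain Python: keep the top results seen so far and signal a stop once their standard deviation has stayed below std for more than patience consecutive checks. This is an illustrative approximation of the documented behavior, not Ray's implementation; the class and method names are hypothetical.

```python
import statistics

class TopPlateauCheck:
    """Illustrative sketch of the experiment-level plateau rule."""

    def __init__(self, std=0.001, top=10, mode="min", patience=0):
        if mode not in ("min", "max"):
            raise ValueError("mode must be 'min' or 'max'")
        self._std = std
        self._top = top
        self._mode = mode
        self._patience = patience
        self._top_values = []
        self._plateau_count = 0

    def observe(self, value):
        # Keep only the `top` best metric values seen so far.
        self._top_values.append(value)
        self._top_values.sort(reverse=(self._mode == "max"))
        self._top_values = self._top_values[: self._top]
        # Count consecutive checks on which the top results have plateaued.
        if (len(self._top_values) == self._top
                and statistics.pstdev(self._top_values) < self._std):
            self._plateau_count += 1
        else:
            self._plateau_count = 0
        return self._plateau_count > self._patience
```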
TrialPlateauStopper(metric: str, std: float = 0.01, num_results: int = 4, grace_period: int = 4, metric_threshold: Optional[float] = None, mode: Optional[str] = None)¶
Early stop single trials when they reach a plateau.
When the standard deviation of a trial's metric results falls below the threshold std, the trial is considered to have plateaued and will be stopped early.
metric (str) – Metric to check for convergence.
std (float) – Maximum metric standard deviation to decide if a trial plateaued. Defaults to 0.01.
num_results (int) – Number of results to consider for stdev calculation.
grace_period (int) – Minimum number of timesteps before a trial can be early stopped.
metric_threshold (Optional[float]) – Minimum or maximum value the result has to exceed before it can be stopped early.
mode (Optional[str]) – If a metric_threshold argument has been passed, this must be one of [min, max]. Specifies if we optimize for a large metric (max) or a small metric (min). If max, the metric_threshold has to be exceeded, if min the value has to be lower than metric_threshold in order to early stop.
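The per-trial plateau rule can be sketched in plain Python: for each trial, track the last num_results metric values and stop once their standard deviation drops below std, but only after grace_period results. This is an illustrative approximation of the documented behavior, not Ray's source; metric_threshold and mode handling are omitted for brevity, and the class name is hypothetical.

```python
import statistics
from collections import defaultdict, deque

class TrialPlateauCheck:
    """Illustrative sketch: stop a trial once the stdev of its last
    num_results metric values drops below std, after grace_period results."""

    def __init__(self, metric, std=0.01, num_results=4, grace_period=4):
        self._metric = metric
        self._std = std
        self._num_results = num_results
        self._grace_period = grace_period
        self._iter = defaultdict(int)
        self._history = defaultdict(lambda: deque(maxlen=num_results))

    def __call__(self, trial_id, result):
        self._iter[trial_id] += 1
        self._history[trial_id].append(result[self._metric])
        if self._iter[trial_id] < self._grace_period:
            return False  # still within the grace period
        if len(self._history[trial_id]) < self._num_results:
            return False  # not enough results for a stable stdev yet
        return statistics.pstdev(self._history[trial_id]) < self._std

    def stop_all(self):
        return False
```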
TimeoutStopper(timeout)¶
Stops all trials after a certain timeout.
This stopper is automatically created when the time_budget_s argument is passed to tune.run().
timeout (int|float|datetime.timedelta) – Either a number specifying the timeout in seconds, or a datetime.timedelta object.
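The timeout bookkeeping can be sketched with the standard library alone: accept either seconds or a datetime.timedelta (as the documented argument does), record a start time, and report an experiment-wide stop once the budget is exhausted. This is an illustrative sketch, not Ray's implementation; the class name is hypothetical.

```python
import datetime
import time

class TimeoutCheck:
    """Illustrative sketch: signal an experiment-wide stop once the
    elapsed wall-clock time exceeds the given budget."""

    def __init__(self, timeout):
        # Accept seconds (int/float) or a datetime.timedelta, as documented.
        if isinstance(timeout, datetime.timedelta):
            timeout = timeout.total_seconds()
        self._budget = float(timeout)
        self._start = time.monotonic()

    def __call__(self, trial_id, result):
        return False  # individual trials are never stopped by this check

    def stop_all(self):
        return time.monotonic() - self._start >= self._budget
```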