ResultGrid (tune.ResultGrid)

class ray.tune.ResultGrid(experiment_analysis: ray.tune.analysis.experiment_analysis.ExperimentAnalysis)

A set of Result objects for interacting with Ray Tune results.

You can use it to inspect the trials and obtain the best result.

The constructor is a private API. This object can only be created as a result of Tuner.fit().

Example

>>> import random
>>> from ray import air, tune
>>> def random_error_trainable(config):
...     if random.random() < 0.5:
...         return {"loss": 0.0}
...     else:
...         raise ValueError("This is an error")
>>> tuner = tune.Tuner(
...     random_error_trainable,
...     run_config=air.RunConfig(name="example-experiment"),
...     tune_config=tune.TuneConfig(num_samples=10),
... )
>>> result_grid = tuner.fit()  
>>> for i in range(len(result_grid)): 
...     result = result_grid[i]
...     if not result.error:
...         print(f"Trial finished successfully with metrics "
...               f"{result.metrics}.")
...     else:
...         print(f"Trial failed with error {result.error}.")

You can also use result_grid for more advanced analysis.

>>> # Get the best result based on a particular metric.
>>> best_result = result_grid.get_best_result( 
...     metric="loss", mode="min")
>>> # Get the best checkpoint corresponding to the best result.
>>> best_checkpoint = best_result.checkpoint 
>>> # Get a dataframe for the last reported results of all of the trials
>>> df = result_grid.get_dataframe() 
>>> # Get a dataframe for the minimum loss seen for each trial
>>> df = result_grid.get_dataframe(metric="loss", mode="min") 

Note that trials of all statuses are included in the final result grid. If a trial is not in a terminated state, its latest result and checkpoint, as seen by Tune, are provided.

See Analyzing Tune Experiment Results for more usage examples.

PublicAPI (beta): This API is in beta and may change before becoming stable.

get_best_result(metric: Optional[str] = None, mode: Optional[str] = None, scope: str = 'last', filter_nan_and_inf: bool = True) -> ray.air.result.Result

Get the best result from all the trials run.

Parameters
  • metric – Key for trial info to order on. Defaults to the metric specified in your Tuner’s TuneConfig.

  • mode – One of [min, max]. Defaults to the mode specified in your Tuner’s TuneConfig.

  • scope – One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, only look at each trial’s final step for metric, and compare across trials based on mode=[min,max]. If scope=avg, consider the simple average over all steps for metric and compare across trials based on mode=[min,max]. If scope=last-5-avg or scope=last-10-avg, consider the simple average over the last 5 or 10 steps for metric and compare across trials based on mode=[min,max]. If scope=all, find each trial’s min/max score for metric based on mode, and compare trials based on mode=[min,max].

  • filter_nan_and_inf – If True (default), NaN or infinite values are disregarded and these trials are never selected as the best trial.
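
For illustration, a minimal sketch, assuming the result_grid from the example above and trials that report a "loss" metric:

>>> best_result = result_grid.get_best_result(metric="loss", mode="min")
>>> best_result.metrics  # final reported metrics of the best trial
>>> best_result.checkpoint  # its final checkpoint, if one was saved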

get_dataframe(filter_metric: Optional[str] = None, filter_mode: Optional[str] = None) -> pandas.core.frame.DataFrame

Return dataframe of all trials with their configs and reported results.

By default, this returns the last reported results for each trial.

If filter_metric and filter_mode are set, the results from each trial are filtered for this metric and mode. For example, if filter_metric="some_metric" and filter_mode="max", for each trial, every received result is checked, and the one where some_metric is maximal is returned.

Example

tuner = Tuner(...)
result_grid = tuner.fit()

# Get last reported results per trial
df = result_grid.get_dataframe()

# Get best ever reported accuracy per trial
df = result_grid.get_dataframe(
    filter_metric="accuracy", filter_mode="max"
)
Parameters
  • filter_metric – Metric to filter best result for.

  • filter_mode – If filter_metric is given, one of ["min", "max"] to specify if we should find the minimum or maximum result.

Returns

Pandas DataFrame with each trial as a row and their results as columns.

property errors

Returns the exceptions of errored trials.

property num_errors

Returns the number of errored trials.

property num_terminated

Returns the number of terminated (but not errored) trials.
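
For illustration, a short sketch that uses these three properties to summarize an experiment, assuming the result_grid from the example above:

>>> print(f"{result_grid.num_terminated} trials finished, "
...       f"{result_grid.num_errors} trials errored.")
>>> for error in result_grid.errors:
...     print(type(error).__name__, error)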

Result (air.Result)

class ray.air.Result(metrics: Optional[Dict[str, Any]], checkpoint: Optional[ray.air.checkpoint.Checkpoint], error: Optional[Exception], log_dir: Optional[pathlib.Path], metrics_dataframe: Optional[pd.DataFrame], best_checkpoints: Optional[List[Tuple[ray.air.checkpoint.Checkpoint, Dict[str, Any]]]])

The final result of an ML training run or a Tune trial.

This is the class produced by Trainer.fit(). It contains a checkpoint, which can be used for resuming training and for creating a Predictor object, as well as a metrics object describing the metrics reported during training. The error attribute is included so that unsuccessful runs and trials can be represented as well.

The constructor is a private API.

Parameters
  • metrics – The final metrics as reported by a Trainable.

  • checkpoint – The final checkpoint of the Trainable.

  • error – The execution error of the Trainable run, if the trial finishes in error.

  • log_dir – Directory where the trial logs are saved.

  • metrics_dataframe – The full result dataframe of the Trainable. The dataframe is indexed by iterations and contains reported metrics.

  • best_checkpoints – A list of tuples of the best checkpoints saved by the Trainable and their associated metrics. The number of saved checkpoints is determined by the checkpoint_config argument of run_config (by default, all checkpoints will be saved).

PublicAPI (beta): This API is in beta and may change before becoming stable.

property config: Optional[Dict[str, Any]]

The config associated with the result.
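
As a sketch of how these fields are typically read, assuming a result obtained from a ResultGrid as shown earlier, and that metric and mode were set in the Tuner's TuneConfig so get_best_result() needs no arguments:

>>> result = result_grid.get_best_result()
>>> result.config  # hyperparameter configuration of the trial
>>> result.metrics  # last reported metrics dict
>>> result.checkpoint  # final checkpoint, e.g. for building a Predictor
>>> result.metrics_dataframe  # full history of reported metrics (may be None)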

ExperimentAnalysis (tune.ExperimentAnalysis)

class ray.tune.ExperimentAnalysis(experiment_checkpoint_path: str, trials: Optional[List[ray.tune.experiment.trial.Trial]] = None, default_metric: Optional[str] = None, default_mode: Optional[str] = None, sync_config: Optional[ray.tune.syncer.SyncConfig] = None)

Analyze results from a Tune experiment.

To use this class, the experiment must be executed with the JsonLogger.

Parameters
  • experiment_checkpoint_path – Path to a json file or directory representing an experiment state, or a directory containing multiple experiment states (a run’s local_dir). Corresponds to Experiment.local_dir/Experiment.name/experiment_state.json

  • trials – List of trials that can be accessed via analysis.trials.

  • default_metric – Default metric for comparing results. Can be overwritten with the metric parameter in the respective functions.

  • default_mode – Default mode for comparing results. Has to be one of [min, max]. Can be overwritten with the mode parameter in the respective functions.

Example

>>> from ray import tune
>>> from ray.tune import ExperimentAnalysis
>>> tune.run( 
...     my_trainable, name="my_exp", local_dir="~/tune_results")
>>> analysis = ExperimentAnalysis( 
...     experiment_checkpoint_path="~/tune_results/my_exp/state.json")

PublicAPI (beta): This API is in beta and may change before becoming stable.

property best_trial: ray.tune.experiment.trial.Trial

Get the best trial of the experiment

The best trial is determined by comparing the last trial results using the metric and mode parameters passed to tune.run().

If you didn’t pass these parameters, use get_best_trial(metric, mode, scope) instead.

property best_config: Dict

Get the config of the best trial of the experiment

The best trial is determined by comparing the last trial results using the metric and mode parameters passed to tune.run().

If you didn’t pass these parameters, use get_best_config(metric, mode, scope) instead.

property best_checkpoint: ray.air.checkpoint.Checkpoint

Get the checkpoint path of the best trial of the experiment

The best trial is determined by comparing the last trial results using the metric and mode parameters passed to tune.run().

If you didn’t pass these parameters, use get_best_checkpoint(trial, metric, mode) instead.

Returns

Checkpoint object.

property best_logdir: str

Get the logdir of the best trial of the experiment

The best trial is determined by comparing the last trial results using the metric and mode parameters passed to tune.run().

If you didn’t pass these parameters, use get_best_logdir(metric, mode) instead.

property best_dataframe: pandas.core.frame.DataFrame

Get the full result dataframe of the best trial of the experiment

The best trial is determined by comparing the last trial results using the metric and mode parameters passed to tune.run().

If you didn’t pass these parameters, use get_best_logdir(metric, mode) and look up the corresponding dataframe in the self.trial_dataframes dict.

property best_result: Dict

Get the last result of the best trial of the experiment

The best trial is determined by comparing the last trial results using the metric and mode parameters passed to tune.run().

If you didn’t pass these parameters, use get_best_trial(metric, mode, scope).last_result instead.

property best_result_df: pandas.core.frame.DataFrame

Get the best result of the experiment as a pandas dataframe.

The best trial is determined by comparing the last trial results using the metric and mode parameters passed to tune.run().

If you didn’t pass these parameters, use get_best_trial(metric, mode, scope).last_result instead.

property results: Dict[str, Dict]

Get the last result of all trials of the experiment

property results_df: pandas.core.frame.DataFrame

Get all the last results as a pandas dataframe.

property trial_dataframes: Dict[str, pandas.core.frame.DataFrame]

Dict mapping trial logdirs to the dataframes of their results.

Each dataframe is indexed by iterations and contains reported metrics.
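
For illustration, a minimal sketch iterating over the per-trial dataframes, assuming an analysis object constructed as in the class example above:

>>> for logdir, df in analysis.trial_dataframes.items():
...     print(logdir, len(df))  # number of reported results per trial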

dataframe(metric: Optional[str] = None, mode: Optional[str] = None) -> pandas.core.frame.DataFrame

Returns a pandas.DataFrame object constructed from the trials.

This function looks through all observed results of each trial and returns the one corresponding to the passed metric and mode: if mode=min, it returns the result with the lowest ever observed value for metric for that trial (not necessarily the last); for mode=max, the highest. If metric=None or mode=None, the last result is returned.

Parameters
  • metric – Key for trial info to order on. If None, uses last result.

  • mode – One of [None, “min”, “max”].

Returns

A DataFrame constructed from one result dict per trial.

Return type

pd.DataFrame
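
For illustration, a minimal sketch, assuming the same analysis object and trials that report a "loss" metric:

>>> df = analysis.dataframe()  # last reported result per trial
>>> df = analysis.dataframe(metric="loss", mode="min")  # lowest observed loss per trial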

get_trial_checkpoints_paths(trial: ray.tune.experiment.trial.Trial, metric: Optional[str] = None) -> List[Tuple[str, numbers.Number]]

Gets paths and metrics of all persistent checkpoints of a trial.

Parameters
  • trial – The log directory of a trial, or a trial instance.

  • metric – Key for trial info to return, e.g. “mean_accuracy”. “training_iteration” is used by default if no value was passed to self.default_metric.

Returns

List of [path, metric] for all persistent checkpoints of the trial.
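
A short sketch, assuming the same analysis object and a reported "loss" metric:

>>> trial = analysis.get_best_trial(metric="loss", mode="min")
>>> for path, value in analysis.get_trial_checkpoints_paths(trial, metric="loss"):
...     print(path, value)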

get_best_checkpoint(trial: ray.tune.experiment.trial.Trial, metric: Optional[str] = None, mode: Optional[str] = None, return_path: bool = False) -> Optional[Union[ray.air.checkpoint.Checkpoint, str]]

Gets best persistent checkpoint path of provided trial.

Any checkpoints with an associated metric value of nan will be filtered out.

Parameters
  • trial – The log directory of a trial, or a trial instance.

  • metric – Key of trial info to return, e.g. “mean_accuracy”. “training_iteration” is used by default if no value was passed to self.default_metric.

  • mode – One of [min, max]. Defaults to self.default_mode.

  • return_path – If True, only returns the path (and not the Checkpoint object). If using Ray client, it is not guaranteed that this path is available on the local (client) node. Can also contain a cloud URI.

Returns

Checkpoint object or string if return_path=True.
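
A minimal sketch, assuming the same analysis object and a reported "loss" metric:

>>> trial = analysis.get_best_trial(metric="loss", mode="min")
>>> checkpoint = analysis.get_best_checkpoint(trial, metric="loss", mode="min")
>>> path = analysis.get_best_checkpoint(
...     trial, metric="loss", mode="min", return_path=True)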

get_all_configs(prefix: bool = False) -> Dict[str, Dict]

Returns a dict of all trial configurations.

Parameters

prefix – If True, flattens the config dict and prepends config/.

Returns

Dict of all configurations of trials, indexed by their trial dir.

Return type

Dict[str, Dict]
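
A minimal sketch, assuming the same analysis object:

>>> configs = analysis.get_all_configs(prefix=True)
>>> for trial_dir, config in configs.items():
...     print(trial_dir, config)  # keys are flattened and prefixed with config/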

get_best_trial(metric: Optional[str] = None, mode: Optional[str] = None, scope: str = 'last', filter_nan_and_inf: bool = True) -> Optional[ray.tune.experiment.trial.Trial]

Retrieve the best trial object.

Compares all trials’ scores on metric. If metric is not specified, self.default_metric will be used. If mode is not specified, self.default_mode will be used. These values are usually initialized by passing the metric and mode parameters to tune.run().

Parameters
  • metric – Key for trial info to order on. Defaults to self.default_metric.

  • mode – One of [min, max]. Defaults to self.default_mode.

  • scope – One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, only look at each trial’s final step for metric, and compare across trials based on mode=[min,max]. If scope=avg, consider the simple average over all steps for metric and compare across trials based on mode=[min,max]. If scope=last-5-avg or scope=last-10-avg, consider the simple average over the last 5 or 10 steps for metric and compare across trials based on mode=[min,max]. If scope=all, find each trial’s min/max score for metric based on mode, and compare trials based on mode=[min,max].

  • filter_nan_and_inf – If True (default), NaN or infinite values are disregarded and these trials are never selected as the best trial.

Returns

The best trial for the provided metric. If no trials contain the provided metric, or if the value for the metric is NaN for all trials, then returns None.
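
A minimal sketch, assuming the same analysis object and a reported "loss" metric:

>>> best_trial = analysis.get_best_trial(metric="loss", mode="min", scope="all")
>>> if best_trial is not None:  # None if no trial reported "loss"
...     print(best_trial.last_result)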

get_best_config(metric: Optional[str] = None, mode: Optional[str] = None, scope: str = 'last') -> Optional[Dict]

Retrieve the best config corresponding to the trial.

Compares all trials’ scores on metric. If metric is not specified, self.default_metric will be used. If mode is not specified, self.default_mode will be used. These values are usually initialized by passing the metric and mode parameters to tune.run().

Parameters
  • metric – Key for trial info to order on. Defaults to self.default_metric.

  • mode – One of [min, max]. Defaults to self.default_mode.

  • scope – One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, only look at each trial’s final step for metric, and compare across trials based on mode=[min,max]. If scope=avg, consider the simple average over all steps for metric and compare across trials based on mode=[min,max]. If scope=last-5-avg or scope=last-10-avg, consider the simple average over the last 5 or 10 steps for metric and compare across trials based on mode=[min,max]. If scope=all, find each trial’s min/max score for metric based on mode, and compare trials based on mode=[min,max].

get_best_logdir(metric: Optional[str] = None, mode: Optional[str] = None, scope: str = 'last') -> Optional[str]

Retrieve the logdir corresponding to the best trial.

Compares all trials’ scores on metric. If metric is not specified, self.default_metric will be used. If mode is not specified, self.default_mode will be used. These values are usually initialized by passing the metric and mode parameters to tune.run().

Parameters
  • metric – Key for trial info to order on. Defaults to self.default_metric.

  • mode – One of [min, max]. Defaults to self.default_mode.

  • scope – One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, only look at each trial’s final step for metric, and compare across trials based on mode=[min,max]. If scope=avg, consider the simple average over all steps for metric and compare across trials based on mode=[min,max]. If scope=last-5-avg or scope=last-10-avg, consider the simple average over the last 5 or 10 steps for metric and compare across trials based on mode=[min,max]. If scope=all, find each trial’s min/max score for metric based on mode, and compare trials based on mode=[min,max].
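
A sketch that combines these two lookups with trial_dataframes, assuming the same analysis object and a reported "loss" metric:

>>> best_config = analysis.get_best_config(metric="loss", mode="min")
>>> best_logdir = analysis.get_best_logdir(metric="loss", mode="min")
>>> best_df = analysis.trial_dataframes[best_logdir]  # full history of the best trial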

get_last_checkpoint(trial=None, metric='training_iteration', mode='max')

Gets the last persistent checkpoint path of the provided trial, i.e., with the highest “training_iteration”.

If no trial is specified, it loads the best trial according to the provided metric and mode (by default, the maximum training iteration).

Parameters
  • trial – The log directory or an instance of a trial. If None, the best trial is loaded automatically according to metric and mode.

  • metric – If no trial is specified, use this metric to identify the best trial and load the last checkpoint from this trial.

  • mode – If no trial is specified, use the metric and this mode to identify the best trial and load the last checkpoint from it.

Returns

Path to the last checkpoint of the trial.
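
A minimal sketch, assuming the same analysis object; the trial log directory below is a hypothetical placeholder:

>>> ckpt = analysis.get_last_checkpoint()  # from the best trial by training_iteration
>>> ckpt = analysis.get_last_checkpoint(
...     trial="~/tune_results/my_exp/<trial_dir>")  # hypothetical trial log directory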

fetch_trial_dataframes() -> Dict[str, pandas.core.frame.DataFrame]

Fetches trial dataframes from files.

Returns

A dictionary mapping trial directories to DataFrames.

stats() -> Dict

Returns a dictionary of the statistics of the experiment.

If experiment_checkpoint_path pointed to a directory of experiments, the dict will be in the format of {experiment_session_id: stats}.

set_filetype(file_type: Optional[str] = None)

Overrides the existing file type.

Parameters

file_type – Read results from json or csv files. Has to be one of [None, json, csv]. Defaults to csv.

runner_data() -> Dict

Returns a dictionary of the TrialRunner data.

If experiment_checkpoint_path pointed to a directory of experiments, the dict will be in the format of {experiment_session_id: TrialRunner_data}.