Analysis (tune.analysis)

You can use the ExperimentAnalysis object to analyze results. It is returned automatically when calling tune.run:

analysis = tune.run(
    trainable,
    name="example-experiment",
    num_samples=10,
)

Here are some example operations for obtaining a summary of your experiment:

# Get a dataframe for the last reported results of all of the trials
df = analysis.dataframe()

# Get a dataframe for the max accuracy seen for each trial
df = analysis.dataframe(metric="mean_accuracy", mode="max")

# Get a dict mapping {trial logdir -> dataframes} for all trials in the experiment.
all_dataframes = analysis.trial_dataframes

# Get a list of trials
trials = analysis.trials
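
Each entry in trials is a Trial object exposing, among other things, its config and last_result. A minimal sketch for inspecting them (mean_accuracy is an assumed metric name; use whatever your trainable reports):

# Print each trial's hyperparameters and its last reported accuracy.
# "mean_accuracy" is an assumed metric name.
for trial in analysis.trials:
    print(trial.trial_id, trial.config, trial.last_result.get("mean_accuracy"))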

You may want to get a summary of multiple experiments that point to the same local_dir. For this, you can use the Analysis class.

from ray.tune import Analysis
analysis = Analysis("~/ray_results/example-experiment")
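
The resulting object supports the same summary operations shown above, for example (mean_accuracy is an assumed metric name):

# Best configuration across every trial found under that directory.
best_config = analysis.get_best_config(metric="mean_accuracy", mode="max")
df = analysis.dataframe(metric="mean_accuracy", mode="max")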

ExperimentAnalysis (tune.ExperimentAnalysis)

class ray.tune.ExperimentAnalysis(experiment_checkpoint_path, trials=None)[source]

Bases: ray.tune.analysis.experiment_analysis.Analysis

Analyze results from a Tune experiment.

To use this class, the experiment must be executed with the JsonLogger.

Parameters
  • experiment_checkpoint_path (str) – Path to a json file representing an experiment state. Corresponds to Experiment.local_dir/Experiment.name/experiment_state.json

  • trials (list|None) – List of trials that can be accessed via analysis.trials.

Example

>>> tune.run(my_trainable, name="my_exp", local_dir="~/tune_results")
>>> analysis = ExperimentAnalysis(
...     experiment_checkpoint_path="~/tune_results/my_exp/state.json")

get_best_trial(metric, mode='max', scope='all')[source]

Retrieve the best trial object.

Compares all trials’ scores on metric.

Parameters
  • metric (str) – Key for trial info to order on.

  • mode (str) – One of [min, max].

  • scope (str) – One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, only look at each trial’s final step for metric, and compare across trials based on mode=[min,max]. If scope=avg, consider the simple average over all steps for metric and compare across trials based on mode=[min,max]. If scope=last-5-avg or scope=last-10-avg, consider the simple average over the last 5 or 10 steps for metric and compare across trials based on mode=[min,max]. If scope=all, find each trial’s min/max score for metric based on mode, and compare trials based on mode=[min,max].
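
For example, to pick the trial whose final reported value of an assumed mean_accuracy metric is highest:

best_trial = analysis.get_best_trial("mean_accuracy", mode="max", scope="last")
print(best_trial.config, best_trial.last_result)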

get_best_config(metric, mode='max', scope='all')[source]

Retrieve the config of the best trial.

Compares all trials’ scores on metric.

Parameters
  • metric (str) – Key for trial info to order on.

  • mode (str) – One of [min, max].

  • scope (str) – One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, only look at each trial’s final step for metric, and compare across trials based on mode=[min,max]. If scope=avg, consider the simple average over all steps for metric and compare across trials based on mode=[min,max]. If scope=last-5-avg or scope=last-10-avg, consider the simple average over the last 5 or 10 steps for metric and compare across trials based on mode=[min,max]. If scope=all, find each trial’s min/max score for metric based on mode, and compare trials based on mode=[min,max].
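
For example, to recover the hyperparameters of the best-performing trial so they can be reused (mean_accuracy is an assumed metric name):

# Plain dict of the winning trial's hyperparameters.
best_config = analysis.get_best_config("mean_accuracy", mode="max")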

get_best_logdir(metric, mode='max', scope='all')[source]

Retrieve the logdir corresponding to the best trial.

Compares all trials’ scores on metric.

Parameters
  • metric (str) – Key for trial info to order on.

  • mode (str) – One of [min, max].

  • scope (str) – One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, only look at each trial’s final step for metric, and compare across trials based on mode=[min,max]. If scope=avg, consider the simple average over all steps for metric and compare across trials based on mode=[min,max]. If scope=last-5-avg or scope=last-10-avg, consider the simple average over the last 5 or 10 steps for metric and compare across trials based on mode=[min,max]. If scope=all, find each trial’s min/max score for metric based on mode, and compare trials based on mode=[min,max].
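
The logdir is useful for locating artifacts written by the best trial, such as checkpoints. A sketch (the directory layout depends on how your trainable saves):

import os

best_logdir = analysis.get_best_logdir("mean_accuracy", mode="max")
print(os.listdir(best_logdir))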

stats()[source]

Returns a dictionary of the statistics of the experiment.

runner_data()[source]

Returns a dictionary of the TrialRunner data.
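
Both methods return plain dictionaries that can be inspected directly:

print(analysis.stats())        # experiment-level statistics
print(analysis.runner_data())  # TrialRunner data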

Analysis (tune.Analysis)

class ray.tune.Analysis(experiment_dir)[source]

Analyze all results from a directory of experiments.

To use this class, the experiment must be executed with the JsonLogger.

dataframe(metric=None, mode=None)[source]

Returns a pandas.DataFrame object constructed from the trials.

Parameters
  • metric (str) – Key for trial info to order on. If None, uses last result.

  • mode (str) – One of [min, max].

Returns

Constructed from a result dict of each trial.

Return type

pd.DataFrame

get_best_config(metric, mode='max')[source]

Retrieve the config of the best trial.

Parameters
  • metric (str) – Key for trial info to order on.

  • mode (str) – One of [min, max].

get_best_logdir(metric, mode='max')[source]

Retrieve the logdir corresponding to the best trial.

Parameters
  • metric (str) – Key for trial info to order on.

  • mode (str) – One of [min, max].

get_all_configs(prefix=False)[source]

Returns a list of all configurations.

Parameters

prefix (bool) – If True, flattens the config dict and prepends config/.

Returns

List of all configurations of trials.

Return type

List[dict]
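
For example, to collect every configuration that was evaluated, with keys flattened under a config/ prefix:

all_configs = analysis.get_all_configs(prefix=True)
print(len(all_configs), "trial configurations collected")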

get_trial_checkpoints_paths(trial, metric='training_iteration')[source]

Gets paths and metrics of all persistent checkpoints of a trial.

Parameters
  • trial (Trial) – The log directory of a trial, or a trial instance.

  • metric (str) – key for trial info to return, e.g. “mean_accuracy”. “training_iteration” is used by default.

Returns

List of [path, metric] for all persistent checkpoints of the trial.
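
For example, to locate the highest-accuracy checkpoint of the best trial (mean_accuracy is an assumed metric name):

best_logdir = analysis.get_best_logdir("mean_accuracy", mode="max")
checkpoints = analysis.get_trial_checkpoints_paths(best_logdir, metric="mean_accuracy")
# checkpoints is a list of [path, metric] pairs; keep the path with the best metric.
best_path = max(checkpoints, key=lambda pair: pair[1])[0]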

property trial_dataframes

Dict of all dataframes of the trials, keyed by trial logdir.
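
A common use is overlaying the training curve of every trial on a single axis (a sketch; assumes matplotlib is installed and that the trials reported mean_accuracy):

# Overlay every trial's mean_accuracy curve on one set of axes.
ax = None
for df in analysis.trial_dataframes.values():
    ax = df.mean_accuracy.plot(ax=ax, legend=False)
ax.set_ylabel("mean_accuracy")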