Analysis/Logging (tune.analysis / tune.logger)

Analyzing Results

ExperimentAnalysis

class ray.tune.ExperimentAnalysis(experiment_checkpoint_path, trials=None)[source]

Bases: ray.tune.analysis.experiment_analysis.Analysis

Analyze results from a Tune experiment.

To use this class, the experiment must be executed with the JsonLogger.

Parameters
  • experiment_checkpoint_path (str) – Path to a json file representing an experiment state. Corresponds to Experiment.local_dir/Experiment.name/experiment_state.json

  • trials (list|None) – List of trials that can be accessed via analysis.trials.

Example

>>> tune.run(my_trainable, name="my_exp", local_dir="~/tune_results")
>>> analysis = ExperimentAnalysis(
...     experiment_checkpoint_path="~/tune_results/my_exp/state.json")
get_best_trial(metric, mode='max', scope='all')[source]

Retrieve the best trial object.

Compares all trials’ scores on metric.

Parameters
  • metric (str) – Key for trial info to order on.

  • mode (str) – One of [min, max].

  • scope (str) – One of [all, last]. If scope=last, only each trial’s final reported value for metric is considered; if scope=all, each trial’s best value for metric (per mode) across all steps is considered. Trials are then compared according to mode.
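
The difference between the two scope values can be illustrated with a small standalone sketch. This is plain Python over made-up per-trial metric histories, not Ray internals; the trial names and values are hypothetical:

```python
# Sketch of the scope semantics for get_best_trial, using made-up
# per-trial metric histories instead of real Trial objects.

# Each trial maps to the sequence of "mean_accuracy" values it reported.
histories = {
    "trial_a": [0.50, 0.90, 0.60],  # peaked mid-run, then degraded
    "trial_b": [0.40, 0.70, 0.80],  # best at its final step
}

def best_trial(histories, mode="max", scope="all"):
    """Pick the best trial name under the given mode/scope."""
    agg = max if mode == "max" else min
    if scope == "last":
        # Compare only each trial's final reported value.
        score = {t: h[-1] for t, h in histories.items()}
    else:  # scope == "all"
        # Compare each trial's best (per mode) value across all steps.
        score = {t: agg(h) for t, h in histories.items()}
    return agg(score, key=score.get)

print(best_trial(histories, scope="all"))   # trial_a wins on its 0.90 peak
print(best_trial(histories, scope="last"))  # trial_b wins on its final 0.80
```

With scope=all, a trial that degraded after a strong peak can still win; with scope=last, only the final state of each trial matters.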

get_best_config(metric, mode='max', scope='all')[source]

Retrieve the config corresponding to the best trial.

Compares all trials’ scores on metric.

Parameters
  • metric (str) – Key for trial info to order on.

  • mode (str) – One of [min, max].

  • scope (str) – One of [all, last]. If scope=last, only each trial’s final reported value for metric is considered; if scope=all, each trial’s best value for metric (per mode) across all steps is considered. Trials are then compared according to mode.

get_best_logdir(metric, mode='max', scope='all')[source]

Retrieve the logdir corresponding to the best trial.

Compares all trials’ scores on metric.

Parameters
  • metric (str) – Key for trial info to order on.

  • mode (str) – One of [min, max].

  • scope (str) – One of [all, last]. If scope=last, only each trial’s final reported value for metric is considered; if scope=all, each trial’s best value for metric (per mode) across all steps is considered. Trials are then compared according to mode.

stats()[source]

Returns a dictionary of the statistics of the experiment.

runner_data()[source]

Returns a dictionary of the TrialRunner data.

Analysis

class ray.tune.Analysis(experiment_dir)[source]

Analyze all results from a directory of experiments.

To use this class, the experiment must be executed with the JsonLogger.

dataframe(metric=None, mode=None)[source]

Returns a pandas.DataFrame object constructed from the trials.

Parameters
  • metric (str) – Key for trial info to order on. If None, uses last result.

  • mode (str) – One of [min, max].

Returns

Constructed from a result dict of each trial.

Return type

pd.DataFrame
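
As a rough sketch of the shape of the returned object, the snippet below builds a DataFrame directly with pandas from made-up per-trial result dicts. It only illustrates the resulting structure, not Ray’s actual construction logic; the column names are hypothetical:

```python
import pandas as pd

# Made-up last-result dicts, one per trial, standing in for what
# dataframe() gathers when metric is None (each trial's last result).
last_results = [
    {"trial_id": "a1", "mean_accuracy": 0.81, "training_iteration": 10},
    {"trial_id": "b2", "mean_accuracy": 0.77, "training_iteration": 10},
]

# One row per trial, one column per result key.
df = pd.DataFrame(last_results)
print(df.sort_values("mean_accuracy", ascending=False))
```

Because each trial contributes one row, the usual pandas operations (sorting, filtering, grouping) apply directly to the experiment results.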

get_best_config(metric, mode='max')[source]

Retrieve the config corresponding to the best trial.

Parameters
  • metric (str) – Key for trial info to order on.

  • mode (str) – One of [min, max].

get_best_logdir(metric, mode='max')[source]

Retrieve the logdir corresponding to the best trial.

Parameters
  • metric (str) – Key for trial info to order on.

  • mode (str) – One of [min, max].

get_all_configs(prefix=False)[source]

Returns a list of all configurations.

Parameters

prefix (bool) – If True, flattens the config dict and prepends config/.

Returns

List of all configurations of trials.

Return type

List[dict]
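
The effect of prefix=True can be sketched with a small flatten helper. This is a standalone approximation of the described behavior, not Ray’s implementation; the example config keys are made up:

```python
def flatten_with_prefix(config, prefix="config/"):
    """Flatten a nested config dict, prepending the given prefix to
    each flattened key (an approximation of prefix=True)."""
    out = {}
    for key, value in config.items():
        full = prefix + key
        if isinstance(value, dict):
            # Recurse with the extended path as the new prefix.
            out.update(flatten_with_prefix(value, full + "/"))
        else:
            out[full] = value
    return out

print(flatten_with_prefix({"lr": 0.1, "net": {"depth": 3}}))
# {'config/lr': 0.1, 'config/net/depth': 3}
```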

get_trial_checkpoints_paths(trial, metric='training_iteration')[source]

Gets paths and metrics of all persistent checkpoints of a trial.

Parameters
  • trial (Trial|str) – A trial instance, or the path to a trial’s log directory.

  • metric (str) – Key for trial info to return, e.g. "mean_accuracy". "training_iteration" is used by default.

Returns

List of [path, metric] for all persistent checkpoints of the trial.
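
The returned [path, metric] pairs are plain Python lists, so selecting the best checkpoint is a one-liner. The paths and values below are made up for illustration:

```python
# Made-up [path, metric] pairs in the shape this method returns.
checkpoints = [
    ["/tmp/my_exp/trial_a/checkpoint_5", 0.62],
    ["/tmp/my_exp/trial_a/checkpoint_10", 0.81],
    ["/tmp/my_exp/trial_a/checkpoint_15", 0.74],
]

# Pick the path whose associated metric value is largest.
best_path, best_value = max(checkpoints, key=lambda pair: pair[1])
print(best_path)  # /tmp/my_exp/trial_a/checkpoint_10
```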

property trial_dataframes

List of all dataframes of the trials.

Loggers (tune.logger)

Logger

class ray.tune.logger.Logger(config, logdir, trial=None)[source]

Logging interface for ray.tune.

By default, the UnifiedLogger implementation is used, which logs results in multiple formats (TensorBoard, rllab/viskit, plain json, custom loggers) at once.

Parameters
  • config – Configuration passed to all logger creators.

  • logdir – Directory for all logger creators to log to.

  • trial (Trial) – Trial object for the logger to access.
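
A custom logger subclasses Logger and handles each reported result. The exact hook names are not spelled out on this page, so the standalone sketch below (no ray import) only mimics the general shape; `_init`, `on_result`, `flush`, and `close` mirror the interface of Tune’s built-in loggers as an assumption — verify against your Ray version before relying on them:

```python
# Standalone sketch of a custom Tune-style logger. A real one would
# subclass ray.tune.logger.Logger; the hook names used here are an
# assumption mirroring Tune's built-in loggers, not confirmed by this page.
class InMemoryLogger:
    def __init__(self, config, logdir, trial=None):
        self.config = config    # configuration passed by the logger creator
        self.logdir = logdir    # directory this logger may write to
        self.trial = trial
        self._init()

    def _init(self):
        # Set up storage instead of opening a file handle.
        self.results = []

    def on_result(self, result):
        # Called once per reported training result.
        self.results.append(result)

    def flush(self):
        pass  # nothing buffered in this sketch

    def close(self):
        self.flush()

logger = InMemoryLogger(config={"lr": 0.01}, logdir="/tmp/trial_x")
logger.on_result({"training_iteration": 1, "mean_accuracy": 0.7})
logger.close()
print(len(logger.results))  # 1
```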

UnifiedLogger

class ray.tune.logger.UnifiedLogger(config, logdir, trial=None, loggers=None, sync_function=None)[source]

Unified result logger for TensorBoard, rllab/viskit, plain json.

Parameters
  • config – Configuration passed to all logger creators.

  • logdir – Directory for all logger creators to log to.

  • loggers (list) – List of logger creators. Defaults to the CSV, TensorBoard, and JSON loggers.

  • sync_function (func|str) – Optional function for syncer to run. See ray/python/ray/tune/syncer.py

TBXLogger

class ray.tune.logger.TBXLogger(config, logdir, trial=None)[source]

TensorBoardX Logger.

Note that hparams will be written only after a trial has terminated. This logger automatically flattens nested dicts to show on TensorBoard:

{"a": {"b": 1, "c": 2}} -> {"a/b": 1, "a/c": 2}

JsonLogger

class ray.tune.logger.JsonLogger(config, logdir, trial=None)[source]

Logs trial results in json format.

Also writes to a results file and param.json file when results or configurations are updated. Experiments must be executed with the JsonLogger to be compatible with the ExperimentAnalysis tool.

CSVLogger

class ray.tune.logger.CSVLogger(config, logdir, trial=None)[source]

Logs results to progress.csv under the trial directory.

Automatically flattens nested dicts in the result dict before writing to csv:

{"a": {"b": 1, "c": 2}} -> {"a/b": 1, "a/c": 2}

MLFLowLogger

class ray.tune.logger.MLFLowLogger(config, logdir, trial=None)[source]

MLflow logger.

Requires the experiment configuration to contain an MLflow experiment ID, or the appropriate environment variables to be set manually.