Loggers (tune.logger)

Tune has default loggers for TensorBoard, CSV, and JSON formats.

Logging Path

Tune will log the results of each trial to a subfolder under a specified local dir, which defaults to ~/ray_results.

from ray import tune

# This logs to 2 different trial folders:
# ~/ray_results/trainable_name/trial_name_1 and ~/ray_results/trainable_name/trial_name_2
# trainable_name and trial_name are autogenerated.
tune.run(trainable, num_samples=2)

You can specify the local_dir and an experiment name, which replaces trainable_name in the path:

# This logs to 2 different trial folders:
# ./results/test_experiment/trial_name_1 and ./results/test_experiment/trial_name_2
# Only trial_name is autogenerated.
tune.run(trainable, num_samples=2, local_dir="./results", name="test_experiment")

To specify custom trial folder names, you can pass the trial_name_creator argument to tune.run. This takes a function with the following signature:

def trial_name_string(trial):
    """
    Args:
        trial (Trial): A generated trial object.

    Returns:
        trial_name (str): String representation of Trial.
    """
    return str(trial)

tune.run(
    MyTrainableClass,
    name="example-experiment",
    num_samples=1,
    trial_name_creator=trial_name_string
)
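
For example, a name creator that combines the trainable name with the trial ID might look like the sketch below (it assumes the Trial object exposes trainable_name and trial_id attributes; check the Trial documentation for your Ray version):

def short_trial_name(trial):
    # Assumes Trial exposes `trainable_name` and `trial_id`;
    # verify these attribute names for your Ray version.
    return "{}_{}".format(trial.trainable_name, trial.trial_id)

Pass it as trial_name_creator=short_trial_name, as in the call above.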

See the documentation on Trial objects for more details.

Custom Loggers

You can pass in your own logging mechanisms to output logs in custom formats as follows:

from ray.tune.logger import DEFAULT_LOGGERS

tune.run(
    MyTrainableClass,
    name="experiment_name",
    loggers=DEFAULT_LOGGERS + (CustomLogger1, CustomLogger2)
)

These loggers will be called along with the default Tune loggers. All loggers must inherit the Logger interface (see Logger below). You can also check out logger.py for implementation details.

An example can be found in logging_example.py.
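
As a rough sketch, a custom logger that appends each reported result as a JSON line to a text file might look like the following (this assumes the Logger base class calls an _init hook, provides self.logdir, and invokes on_result for every reported result and close on shutdown; confirm the exact hook names in logger.py for your Ray version):

import json
import os

from ray.tune.logger import Logger


class JsonLinesLogger(Logger):
    """Hypothetical custom logger: one JSON line per reported result."""

    def _init(self):
        # self.logdir is the trial directory created by Tune.
        self._file = open(os.path.join(self.logdir, "results.jsonl"), "w")

    def on_result(self, result):
        # Called once for every training result reported by the trial.
        self._file.write(json.dumps(result, default=str) + "\n")
        self._file.flush()

    def close(self):
        self._file.close()

It can then be passed in place of CustomLogger1 in the tune.run call above.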

Viskit

Tune automatically integrates with VisKit via the CSVLogger outputs. To use VisKit (you may have to install some dependencies), run:

$ git clone https://github.com/rll/rllab.git
$ python rllab/rllab/viskit/frontend.py ~/ray_results/my_experiment

Non-relevant metrics (like timing stats) can be disabled on the left so that only the relevant ones (like accuracy, loss, etc.) are shown.

[Image: VisKit frontend displaying Tune experiment results]

Logger

class ray.tune.logger.Logger(config, logdir, trial=None)

Logging interface for ray.tune.

By default, the UnifiedLogger implementation is used which logs results in multiple formats (TensorBoard, rllab/viskit, plain json, custom loggers) at once.

Parameters
  • config – Configuration passed to all logger creators.

  • logdir – Directory for all logger creators to log to.

  • trial (Trial) – Trial object for the logger to access.

UnifiedLogger

class ray.tune.logger.UnifiedLogger(config, logdir, trial=None, loggers=None, sync_function=None)

Unified result logger for TensorBoard, rllab/viskit, plain json.

Parameters
  • config – Configuration passed to all logger creators.

  • logdir – Directory for all logger creators to log to.

  • loggers (list) – List of logger creators. Defaults to CSV, TensorBoard, and JSON loggers.

  • sync_function (func|str) – Optional function for syncer to run. See ray/python/ray/tune/syncer.py

TBXLogger

class ray.tune.logger.TBXLogger(config, logdir, trial=None)

TensorBoardX Logger.

Note that hparams will be written only after a trial has terminated. This logger automatically flattens nested dicts to show on TensorBoard:

{"a": {"b": 1, "c": 2}} -> {"a/b": 1, "a/c": 2}

JsonLogger

class ray.tune.logger.JsonLogger(config, logdir, trial=None)

Logs trial results in JSON format.

Also writes to a results file and param.json file when results or configurations are updated. Experiments must be executed with the JsonLogger to be compatible with the ExperimentAnalysis tool.
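
For instance, results written by the JsonLogger can be inspected through the analysis object returned by tune.run; the sketch below assumes your Ray version returns an analysis object exposing a dataframe() method:

analysis = tune.run(MyTrainableClass, name="experiment_name", num_samples=2)

# One row per trial, containing the last reported result of each.
df = analysis.dataframe()
print(df.head())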

CSVLogger

class ray.tune.logger.CSVLogger(config, logdir, trial=None)

Logs results to progress.csv under the trial directory.

Automatically flattens nested dicts in the result dict before writing to csv:

{"a": {"b": 1, "c": 2}} -> {"a/b": 1, "a/c": 2}
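
Because CSVLogger writes a plain progress.csv per trial, the files can be loaded directly with pandas. A small sketch (the experiment path below is illustrative):

import glob
import os

import pandas as pd

# Each trial directory under the experiment folder contains a progress.csv.
for path in glob.glob(os.path.expanduser("~/ray_results/experiment_name/*/progress.csv")):
    df = pd.read_csv(path)
    print(path, len(df), "rows")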

MLFLowLogger

Tune also provides a default logger for MLFlow. You can install MLFlow via pip install mlflow. An example can be found in mlflow_example.py. Note that this currently does not include artifact logging support. For this, you can use the native MLFlow APIs inside your Trainable definition.

class ray.tune.logger.MLFLowLogger(config, logdir, trial=None)

MLFlow logger.

Requires the experiment configuration to have an MLFlow Experiment ID, or the proper environment variables to be set manually.
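
A minimal setup sketch, assuming the logger reads the experiment ID from an mlflow_experiment_id key in the trial config (this key name is an assumption; check the MLFLowLogger source for the exact key in your Ray version):

import mlflow

from ray.tune.logger import DEFAULT_LOGGERS, MLFLowLogger

# Create an MLFlow experiment to log into.
experiment_id = mlflow.create_experiment("tune_experiment")

tune.run(
    MyTrainableClass,
    name="experiment_name",
    config={
        # Assumed config key; verify against the MLFLowLogger implementation.
        "mlflow_experiment_id": experiment_id,
    },
    loggers=DEFAULT_LOGGERS + (MLFLowLogger,)
)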