ray.air.integrations.mlflow.MLflowLoggerCallback
- class ray.air.integrations.mlflow.MLflowLoggerCallback(tracking_uri: Optional[str] = None, *, registry_uri: Optional[str] = None, experiment_name: Optional[str] = None, tags: Optional[Dict] = None, tracking_token: Optional[str] = None, save_artifact: bool = False)
Bases: ray.tune.logger.logger.LoggerCallback
MLflow Logger to automatically log Tune results and config to MLflow.
MLflow (https://mlflow.org) Tracking is an open source library for recording and querying experiments. This Ray Tune LoggerCallback sends information (config parameters, training results & metrics, and artifacts) to MLflow for automatic experiment tracking.
- Parameters
tracking_uri – The tracking URI for where to manage experiments and runs. This can either be a local file path or a remote server. This arg gets passed directly to mlflow initialization. When using Tune in a multi-node setting, make sure to set this to a remote server and not a local file path (see the sketch after the example below).
registry_uri – The registry URI that gets passed directly to mlflow initialization.
experiment_name – The experiment name to use for this Tune run. If the experiment with the name already exists with MLflow, it will be reused. If not, a new experiment will be created with that name.
tags – An optional dictionary of string keys and values to set as tags on the run.
tracking_token – Tracking token used to authenticate with MLflow.
save_artifact – If set to True, automatically save the entire contents of the Tune local_dir as an artifact to the corresponding run in MLflow.
Example:
    from ray import tune
    from ray.air.integrations.mlflow import MLflowLoggerCallback

    tags = {"user_name": "John", "git_commit_hash": "abc123"}

    tune.run(
        train_fn,
        config={
            # define search space here
            "parameter_1": tune.choice([1, 2, 3]),
            "parameter_2": tune.choice([4, 5, 6]),
        },
        callbacks=[MLflowLoggerCallback(
            experiment_name="experiment1",
            tags=tags,
            save_artifact=True)])
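For multi-node runs, the tracking_uri argument described above should point at a remote tracking server rather than a local file path. A minimal sketch, assuming a hypothetical MLflow server URI and the same train_fn as in the example:
    from ray import tune
    from ray.air.integrations.mlflow import MLflowLoggerCallback

    tune.run(
        train_fn,
        config={"parameter_1": tune.choice([1, 2, 3])},
        callbacks=[MLflowLoggerCallback(
            # Hypothetical remote server URI; replace with your own.
            tracking_uri="http://mlflow.example.com:5000",
            experiment_name="experiment1",
            save_artifact=True)])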
- setup(*args, **kwargs)
Called once at the very beginning of training.
Any Callback setup should be added here (setting environment variables, etc.).
- Parameters
stop – Stopping criteria. If time_budget_s was passed to air.RunConfig, a TimeoutStopper will be passed here, either by itself or as a part of a CombinedStopper.
num_samples – Number of times to sample from the hyperparameter space. Defaults to 1. If grid_search is provided as an argument, the grid will be repeated num_samples times. If this is -1, (virtually) infinite samples are generated until a stopping condition is met.
total_num_samples – Total number of samples factoring in grid search samplers.
**info – Kwargs dict for forward compatibility.
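setup() is invoked by Tune itself rather than called directly. As an illustration only, a subclass could hook extra work in before deferring to the parent's MLflow initialization; the class name and the printed message below are hypothetical, and the assumption that num_samples arrives as a keyword argument follows the parameter list above:
    from ray.air.integrations.mlflow import MLflowLoggerCallback

    class VerboseMLflowLoggerCallback(MLflowLoggerCallback):
        """Hypothetical subclass that adds a setup-time side effect."""

        def setup(self, *args, **kwargs):
            # Assumes num_samples is passed as a keyword argument, as the
            # parameter list above suggests; prints None otherwise.
            print("Starting experiment, num_samples =", kwargs.get("num_samples"))
            # Let the parent set up the MLflow experiment and run as usual.
            super().setup(*args, **kwargs)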
- log_trial_start(trial: ray.tune.experiment.trial.Trial)
Handle logging when a trial starts.
- Parameters
trial – Trial object.
- log_trial_result(iteration: int, trial: ray.tune.experiment.trial.Trial, result: Dict)
Handle logging when a trial reports a result.
- Parameters
trial – Trial object.
result – Result dictionary.
- log_trial_end(trial: ray.tune.experiment.trial.Trial, failed: bool = False)
Handle logging when a trial ends.
- Parameters
trial – Trial object.
failed – True if the trial failed (e.g. when it raised an exception), False if it finished gracefully.
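Taken together, log_trial_start, log_trial_result, and log_trial_end trace a trial's lifecycle and are called by Tune, not by user code. A minimal sketch of a subclass that adds console output on top of the MLflow logging; the class name and print statements are illustrative only:
    from typing import Dict

    from ray.air.integrations.mlflow import MLflowLoggerCallback
    from ray.tune.experiment.trial import Trial

    class ChattyMLflowLoggerCallback(MLflowLoggerCallback):
        """Hypothetical subclass that narrates the trial lifecycle."""

        def log_trial_start(self, trial: Trial):
            print(f"Trial {trial} started with config: {trial.config}")
            super().log_trial_start(trial)

        def log_trial_result(self, iteration: int, trial: Trial, result: Dict):
            print(f"Trial {trial} reported a result at iteration {iteration}")
            super().log_trial_result(iteration, trial, result)

        def log_trial_end(self, trial: Trial, failed: bool = False):
            print(f"Trial {trial} ended, failed={failed}")
            super().log_trial_end(trial, failed=failed)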