{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "6df76a1f", "metadata": {}, "source": [ "# Using MLflow with Tune\n", "\n", "(tune-mlflow-ref)=\n", "\n", "[MLflow](https://mlflow.org/) is an open source platform to manage the ML lifecycle, including experimentation,\n", "reproducibility, deployment, and a central model registry. It currently offers four components, among them\n", "MLflow Tracking, which records and queries experiments, including code, data, config, and results.\n", "\n", "```{image} /images/mlflow.png\n", ":align: center\n", ":alt: MLflow\n", ":height: 80px\n", ":target: https://www.mlflow.org/\n", "```\n", "\n", "Ray Tune currently offers two lightweight integrations for MLflow Tracking.\n", "One is the {ref}`MLflowLoggerCallback <tune-mlflow-logger>`, which automatically logs\n", "metrics reported to Tune to the MLflow Tracking API.\n", "\n", "The other one is the {ref}`setup_mlflow <tune-mlflow-setup>` function, which can be\n", "used with the function API. It automatically\n", "initializes the MLflow API with Tune's training information and creates a run for each Tune trial.\n", "Then within your training function, you can use\n", "MLflow as you normally would, e.g. 
using `mlflow.log_metrics()` or even `mlflow.autolog()`\n", "to log metrics from your training process.\n", "\n", "```{contents}\n", ":backlinks: none\n", ":local: true\n", "```\n", "\n", "## Running an MLflow Example\n", "\n", "In the following example, we use both of the above methods, namely the `MLflowLoggerCallback` and\n", "the `setup_mlflow` function, to log metrics.\n", "Let's start with a few crucial imports:" ] }, { "cell_type": "code", "execution_count": 1, "id": "b0e47339", "metadata": {}, "outputs": [], "source": [ "import os\n", "import tempfile\n", "import time\n", "\n", "import mlflow\n", "\n", "from ray import train, tune\n", "from ray.air.integrations.mlflow import MLflowLoggerCallback, setup_mlflow\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "618b6935", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "Next, let's define a simple training function (a Tune `Trainable`) that iteratively computes steps and evaluates\n", "intermediate scores that we report to Tune."
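Before wiring this objective into Tune, here is a quick standalone sanity check of its shape (pure Python, no Tune or MLflow required; the `width`/`height` values are arbitrary illustrations): the score decays over steps toward `height * 0.1`, so lower reported losses correspond to better hyperparameters.

```python
def evaluation_fn(step, width, height):
    # Same toy objective as in the training function below:
    # a decaying term plus a constant offset from `height`.
    return (0.1 + width * step / 100) ** (-1) + height * 0.1


# For fixed width/height, the loss decreases monotonically over steps ...
losses = [evaluation_fn(step, width=50, height=10) for step in range(5)]
print(losses)

# ... and approaches height * 0.1 (here 1.0) from above for large step counts.
print(evaluation_fn(10_000, width=50, height=10))
```

This is only a restatement for intuition; the actual `evaluation_fn` used by the Tune trials is defined in the next cell.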
] }, { "cell_type": "code", "execution_count": 2, "id": "f449538e", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "def evaluation_fn(step, width, height):\n", "    return (0.1 + width * step / 100) ** (-1) + height * 0.1\n", "\n", "\n", "def train_function(config):\n", "    width, height = config[\"width\"], config[\"height\"]\n", "\n", "    for step in range(config.get(\"steps\", 100)):\n", "        # Iterative training function - can be any arbitrary training procedure\n", "        intermediate_score = evaluation_fn(step, width, height)\n", "        # Feed the score back to Tune.\n", "        train.report({\"iterations\": step, \"mean_loss\": intermediate_score})\n", "        time.sleep(0.1)\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "722e5d2f", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "Given an MLflow tracking URI, you can now simply pass the `MLflowLoggerCallback` in the `callbacks` argument of\n", "your `RunConfig`:" ] }, { "cell_type": "code", "execution_count": 3, "id": "8e0b9ab7", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "def tune_with_callback(mlflow_tracking_uri, finish_fast=False):\n", "    tuner = tune.Tuner(\n", "        train_function,\n", "        tune_config=tune.TuneConfig(num_samples=5),\n", "        run_config=train.RunConfig(\n", "            name=\"mlflow\",\n", "            callbacks=[\n", "                MLflowLoggerCallback(\n", "                    tracking_uri=mlflow_tracking_uri,\n", "                    experiment_name=\"mlflow_callback_example\",\n", "                    save_artifact=True,\n", "                )\n", "            ],\n", "        ),\n", "        param_space={\n", "            \"width\": tune.randint(10, 100),\n", "            \"height\": tune.randint(0, 100),\n", "            \"steps\": 5 if finish_fast else 100,\n", "        },\n", "    )\n", "    results = tuner.fit()\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "e086f110", "metadata": {}, "source": [ "To use the `setup_mlflow` utility, you simply call this function in your training function.\n", "Note that we also use `mlflow.log_metrics(...)` to log metrics to MLflow.\n", "Otherwise, this version 
of our training function is identical to the original one." ] }, { "cell_type": "code", "execution_count": 4, "id": "144b8f39", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "def train_function_mlflow(config):\n", "    tracking_uri = config.pop(\"tracking_uri\", None)\n", "    setup_mlflow(\n", "        config,\n", "        experiment_name=\"setup_mlflow_example\",\n", "        tracking_uri=tracking_uri,\n", "    )\n", "\n", "    # Hyperparameters\n", "    width, height = config[\"width\"], config[\"height\"]\n", "\n", "    for step in range(config.get(\"steps\", 100)):\n", "        # Iterative training function - can be any arbitrary training procedure\n", "        intermediate_score = evaluation_fn(step, width, height)\n", "        # Log the metrics to MLflow\n", "        mlflow.log_metrics(dict(mean_loss=intermediate_score), step=step)\n", "        # Feed the score back to Tune.\n", "        train.report({\"iterations\": step, \"mean_loss\": intermediate_score})\n", "        time.sleep(0.1)\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "dc480366", "metadata": {}, "source": [ "With this new objective function ready, you can now create a Tune run with it as follows:" ] }, { "cell_type": "code", "execution_count": 5, "id": "4b9fe6be", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "def tune_with_setup(mlflow_tracking_uri, finish_fast=False):\n", "    # Set the experiment, or create a new one if it does not exist yet.\n", "    mlflow.set_tracking_uri(mlflow_tracking_uri)\n", "    mlflow.set_experiment(experiment_name=\"setup_mlflow_example\")\n", "\n", "    tuner = tune.Tuner(\n", "        train_function_mlflow,\n", "        tune_config=tune.TuneConfig(num_samples=5),\n", "        run_config=train.RunConfig(\n", "            name=\"mlflow\",\n", "        ),\n", "        param_space={\n", "            \"width\": tune.randint(10, 100),\n", "            \"height\": tune.randint(0, 100),\n", "            \"steps\": 5 if finish_fast else 100,\n", "            \"tracking_uri\": mlflow.get_tracking_uri(),\n", "        },\n", "    )\n", "    results = tuner.fit()\n" ] }, { "attachments": {}, "cell_type": 
"markdown", "id": "915dfd30", "metadata": {}, "source": [ "If you happen to have an MLflow tracking URI, you can set it below in the `mlflow_tracking_uri` variable and set\n", "`smoke_test=False`.\n", "Otherwise, you can just run a quick test of the `tune_with_callback` and `tune_with_setup` functions without using MLflow." ] }, { "cell_type": "code", "execution_count": 6, "id": "05d11774", "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "2022-12-22 10:37:53,580\tINFO worker.py:1542 -- Started a local Ray instance. View the dashboard at \u001b[1m\u001b[32mhttp://127.0.0.1:8265 \u001b[39m\u001b[22m\n" ] }, { "data": { "text/html": [ "
\n", "
\n", "
\n", "

Tune Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "
Current time:2022-12-22 10:38:04
Running for: 00:00:06.73
Memory: 10.4/16.0 GiB
\n", "
\n", "
\n", "
\n", "

System Info

\n", " Using FIFO scheduling algorithm.
Resources requested: 0/16 CPUs, 0/0 GPUs, 0.0/4.03 GiB heap, 0.0/2.0 GiB objects\n", "
\n", " \n", "
\n", "
\n", "
\n", "

Trial Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
Trial name status loc height width loss iter total time (s) iterations neg_mean_loss
train_function_b275b_00000TERMINATED127.0.0.1:801 66 367.24935 5 0.587302 4 -7.24935
train_function_b275b_00001TERMINATED127.0.0.1:813 33 353.96667 5 0.507423 4 -3.96667
train_function_b275b_00002TERMINATED127.0.0.1:814 75 298.29365 5 0.518995 4 -8.29365
train_function_b275b_00003TERMINATED127.0.0.1:815 28 633.18168 5 0.567739 4 -3.18168
train_function_b275b_00004TERMINATED127.0.0.1:816 20 183.21951 5 0.526536 4 -3.21951
\n", "
\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "

Trial Progress

\n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
Trial name date done episodes_total experiment_id experiment_tag hostname iterations iterations_since_restore mean_loss neg_mean_lossnode_ip pid time_since_restore time_this_iter_s time_total_s timestamp timesteps_since_restoretimesteps_total training_iterationtrial_id warmup_time
train_function_b275b_000002022-12-22_10-38-01True 28feaa4dd8ab4edab810e8109e77502e0_height=66,width=36kais-macbook-pro.anyscale.com.beta.tailscale.net 4 5 7.24935 -7.24935127.0.0.1 801 0.587302 0.126818 0.587302 1671705481 0 5b275b_00000 0.00293493
train_function_b275b_000012022-12-22_10-38-04True 245010d0c3d0439ebfb664764ae9db3c1_height=33,width=35kais-macbook-pro.anyscale.com.beta.tailscale.net 4 5 3.96667 -3.96667127.0.0.1 813 0.507423 0.122086 0.507423 1671705484 0 5b275b_00001 0.00553799
train_function_b275b_000022022-12-22_10-38-04True 898afbf9b906448c980f399c72a2324c2_height=75,width=29kais-macbook-pro.anyscale.com.beta.tailscale.net 4 5 8.29365 -8.29365127.0.0.1 814 0.518995 0.123554 0.518995 1671705484 0 5b275b_00002 0.0040431
train_function_b275b_000032022-12-22_10-38-04True 03a4476f82734642b6ab0a5040ca58f83_height=28,width=63kais-macbook-pro.anyscale.com.beta.tailscale.net 4 5 3.18168 -3.18168127.0.0.1 815 0.567739 0.125471 0.567739 1671705484 0 5b275b_00003 0.00406194
train_function_b275b_000042022-12-22_10-38-04True ff8c7c55ce6e404f9b0552c17f7a0c404_height=20,width=18kais-macbook-pro.anyscale.com.beta.tailscale.net 4 5 3.21951 -3.21951127.0.0.1 816 0.526536 0.123327 0.526536 1671705484 0 5b275b_00004 0.00332022
\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "2022-12-22 10:38:04,477\tINFO tune.py:772 -- Total run time: 7.99 seconds (6.71 seconds for the tuning loop).\n" ] }, { "data": { "text/html": [ "
\n", "
\n", "
\n", "

Tune Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "
Current time:2022-12-22 10:38:11
Running for: 00:00:07.00
Memory: 10.7/16.0 GiB
\n", "
\n", "
\n", "
\n", "

System Info

\n", " Using FIFO scheduling algorithm.
Resources requested: 0/16 CPUs, 0/0 GPUs, 0.0/4.03 GiB heap, 0.0/2.0 GiB objects\n", "
\n", " \n", "
\n", "
\n", "
\n", "

Trial Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
Trial name status loc height width loss iter total time (s) iterations neg_mean_loss
train_function_mlflow_b73bd_00000TERMINATED127.0.0.1:842 37 684.05461 5 0.750435 4 -4.05461
train_function_mlflow_b73bd_00001TERMINATED127.0.0.1:853 50 206.11111 5 0.652748 4 -6.11111
train_function_mlflow_b73bd_00002TERMINATED127.0.0.1:854 38 834.0924 5 0.6513 4 -4.0924
train_function_mlflow_b73bd_00003TERMINATED127.0.0.1:855 15 931.76178 5 0.650586 4 -1.76178
train_function_mlflow_b73bd_00004TERMINATED127.0.0.1:856 75 438.04945 5 0.656046 4 -8.04945
\n", "
\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
\n", "

Trial Progress

\n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
Trial name date done episodes_total experiment_id experiment_tag hostname iterations iterations_since_restore mean_loss neg_mean_lossnode_ip pid time_since_restore time_this_iter_s time_total_s timestamp timesteps_since_restoretimesteps_total training_iterationtrial_id warmup_time
train_function_mlflow_b73bd_000002022-12-22_10-38-08True 62703cfe82e54d74972377fbb525b0000_height=37,width=68kais-macbook-pro.anyscale.com.beta.tailscale.net 4 5 4.05461 -4.05461127.0.0.1 842 0.750435 0.108625 0.750435 1671705488 0 5b73bd_00000 0.0030272
train_function_mlflow_b73bd_000012022-12-22_10-38-11True 03ea89852115465392ed318db80216141_height=50,width=20kais-macbook-pro.anyscale.com.beta.tailscale.net 4 5 6.11111 -6.11111127.0.0.1 853 0.652748 0.110796 0.652748 1671705491 0 5b73bd_00001 0.00303078
train_function_mlflow_b73bd_000022022-12-22_10-38-11True 3731fc2966f9453ba58c650d89035ab42_height=38,width=83kais-macbook-pro.anyscale.com.beta.tailscale.net 4 5 4.0924 -4.0924 127.0.0.1 854 0.6513 0.108578 0.6513 1671705491 0 5b73bd_00002 0.00310016
train_function_mlflow_b73bd_000032022-12-22_10-38-11True fb35841742b348b9912d10203c730f1e3_height=15,width=93kais-macbook-pro.anyscale.com.beta.tailscale.net 4 5 1.76178 -1.76178127.0.0.1 855 0.650586 0.109097 0.650586 1671705491 0 5b73bd_00003 0.0576491
train_function_mlflow_b73bd_000042022-12-22_10-38-11True 6d3cbf9ecc3446369e607ff78c67bc294_height=75,width=43kais-macbook-pro.anyscale.com.beta.tailscale.net 4 5 8.04945 -8.04945127.0.0.1 856 0.656046 0.109869 0.656046 1671705491 0 5b73bd_00004 0.00265694
\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "2022-12-22 10:38:11,514\tINFO tune.py:772 -- Total run time: 7.01 seconds (6.98 seconds for the tuning loop).\n" ] } ], "source": [ "smoke_test = True\n", "\n", "if smoke_test:\n", "    mlflow_tracking_uri = os.path.join(tempfile.gettempdir(), \"mlruns\")\n", "else:\n", "    mlflow_tracking_uri = \"\"\n", "\n", "tune_with_callback(mlflow_tracking_uri, finish_fast=smoke_test)\n", "if not smoke_test:\n", "    df = mlflow.search_runs(\n", "        [mlflow.get_experiment_by_name(\"mlflow_callback_example\").experiment_id]\n", "    )\n", "    print(df)\n", "\n", "tune_with_setup(mlflow_tracking_uri, finish_fast=smoke_test)\n", "if not smoke_test:\n", "    df = mlflow.search_runs(\n", "        [mlflow.get_experiment_by_name(\"setup_mlflow_example\").experiment_id]\n", "    )\n", "    print(df)\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "f0df0817", "metadata": {}, "source": [ "This completes our Tune and MLflow walk-through.\n", "In the following sections, you can find more details on the API of the Tune-MLflow integration.\n", "\n", "## MLflow AutoLogging\n", "\n", "You can also check out {doc}`here </tune/examples/includes/mlflow_ptl_example>` for an example of how you can\n", "leverage MLflow auto-logging, in this case with PyTorch Lightning.\n", "\n", "## MLflow Logger API\n", "\n", "(tune-mlflow-logger)=\n", "\n", "```{eval-rst}\n", ".. autoclass:: ray.air.integrations.mlflow.MLflowLoggerCallback\n", "    :noindex:\n", "```\n", "\n", "## MLflow setup API\n", "\n", "(tune-mlflow-setup)=\n", "\n", "```{eval-rst}\n", ".. autofunction:: ray.air.integrations.mlflow.setup_mlflow\n", "    :noindex:\n", "```\n", "\n", "## More MLflow Examples\n", "\n", "- {doc}`/tune/examples/includes/mlflow_ptl_example`: Example for using [MLflow](https://github.com/mlflow/mlflow/)\n", "  and [PyTorch Lightning](https://github.com/PyTorchLightning/pytorch-lightning) with Ray Tune."
] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.7" }, "orphan": true }, "nbformat": 4, "nbformat_minor": 5 }