{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "a587ce4e", "metadata": {}, "source": [ "# Running Tune experiments with Optuna\n", "\n", "\n", " \"try-anyscale-quickstart\"\n", "\n", "

\n", "\n", "In this tutorial we introduce Optuna, while running a simple Ray Tune experiment. Tune’s Search Algorithms integrate with Optuna and, as a result, allow you to seamlessly scale up a Optuna optimization process - without sacrificing performance.\n", "\n", "Similar to Ray Tune, Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative (\"how\" over \"what\" emphasis), define-by-run style user API. With Optuna, a user has the ability to dynamically construct the search spaces for the hyperparameters. Optuna falls in the domain of \"derivative-free optimization\" and \"black-box optimization\".\n", "\n", "In this example we minimize a simple objective to briefly demonstrate the usage of Optuna with Ray Tune via `OptunaSearch`, including examples of conditional search spaces (string together relationships between hyperparameters), and the multi-objective problem (measure trade-offs among all important metrics). It's useful to keep in mind that despite the emphasis on machine learning experiments, Ray Tune optimizes any implicit or explicit objective. Here we assume `optuna>=3.0.0` library is installed. To learn more, please refer to [Optuna website](https://optuna.org/).\n", "\n", "Please note that sophisticated schedulers, such as `AsyncHyperBandScheduler`, may not work correctly with multi-objective optimization, since they typically expect a scalar score to compare fitness among trials.\n", "\n", "## Prerequisites" ] }, { "cell_type": "code", "execution_count": 1, "id": "aeaf9ff0", "metadata": { "tags": [ "hide-output" ] }, "outputs": [], "source": [ "# !pip install \"ray[tune]\"\n", "!pip install -q \"optuna>=3.0.0\"\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "467466a3", "metadata": {}, "source": [ "Next, import the necessary libraries:" ] }, { "cell_type": "code", "execution_count": 2, "id": "fb1ad624", "metadata": { "tags": [ "hide-output" ] }, "outputs": [], "source": [ "import time\n", "from typing import Dict, Optional, Any\n", "\n", "import ray\n", "from ray import tune\n", "from ray.tune.search import ConcurrencyLimiter\n", "from ray.tune.search.optuna import OptunaSearch" ] }, { "cell_type": "code", "execution_count": 3, "id": "e64f0b44", "metadata": { "tags": [ "hide-output" ] }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "b2da2a53d02f4c5b9a662b99548723f1", "version_major": 2, "version_minor": 0 }, "text/html": [ "
\n", "
\n", "
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n", "\n", "
Python version:3.10.16
Ray version:2.42.0
Dashboard:http://127.0.0.1:8265
\n", "\n", "
\n", "
\n" ], "text/plain": [ "RayContext(dashboard_url='127.0.0.1:8265', python_version='3.10.16', ray_version='2.42.0', ray_commit='637116a090c052d061af5ba3bef8a467c8c3fc25')" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ray.init(configure_logging=False) # initialize Ray" ] }, { "attachments": {}, "cell_type": "markdown", "id": "56b0c685", "metadata": {}, "source": [ "Let's start by defining a simple evaluation function.\n", "An explicit math formula is queried here for demonstration, yet in practice this is typically a black-box function-- e.g. the performance results after training an ML model.\n", "We artificially sleep for a bit (`0.1` seconds) to simulate a long-running ML experiment.\n", "This setup assumes that we're running multiple `step`s of an experiment while tuning three hyperparameters,\n", "namely `width`, `height`, and `activation`." ] }, { "cell_type": "code", "execution_count": 4, "id": "90a11f98", "metadata": {}, "outputs": [], "source": [ "def evaluate(step, width, height, activation):\n", " time.sleep(0.1)\n", " activation_boost = 10 if activation==\"relu\" else 0\n", " return (0.1 + width * step / 100) ** (-1) + height * 0.1 + activation_boost" ] }, { "attachments": {}, "cell_type": "markdown", "id": "bc579b83", "metadata": {}, "source": [ "Next, our `objective` function to be optimized takes a Tune `config`, evaluates the `score` of your experiment in a training loop,\n", "and uses `tune.report` to report the `score` back to Tune." ] }, { "cell_type": "code", "execution_count": 5, "id": "3a11d0e0", "metadata": { "lines_to_next_cell": 0 }, "outputs": [], "source": [ "def objective(config):\n", " for step in range(config[\"steps\"]):\n", " score = evaluate(step, config[\"width\"], config[\"height\"], config[\"activation\"])\n", " tune.report({\"iterations\": step, \"mean_loss\": score})" ] }, { "attachments": {}, "cell_type": "markdown", "id": "c58bd20b", "metadata": {}, "source": [ "Next we define a search space. The critical assumption is that the optimal hyperparameters live within this space. Yet, if the space is very large, then those hyperparameters may be difficult to find in a short amount of time.\n", "\n", "The simplest case is a search space with independent dimensions. In this case, a config dictionary will suffice." ] }, { "cell_type": "code", "execution_count": 6, "id": "c3e4eecb", "metadata": {}, "outputs": [], "source": [ "search_space = {\n", " \"steps\": 100,\n", " \"width\": tune.uniform(0, 20),\n", " \"height\": tune.uniform(-100, 100),\n", " \"activation\": tune.choice([\"relu\", \"tanh\"]),\n", "}" ] }, { "attachments": {}, "cell_type": "markdown", "id": "ef0c666d", "metadata": {}, "source": [ "Here we define the Optuna search algorithm:" ] }, { "cell_type": "code", "execution_count": 7, "id": "f23cadc8", "metadata": {}, "outputs": [], "source": [ "algo = OptunaSearch()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "4287fa79", "metadata": {}, "source": [ "We also constrain the number of concurrent trials to `4` with a `ConcurrencyLimiter`." ] }, { "cell_type": "code", "execution_count": 8, "id": "68022ea4", "metadata": {}, "outputs": [], "source": [ "algo = ConcurrencyLimiter(algo, max_concurrent=4)\n" ] }, { "attachments": {}, "cell_type": "markdown", "id": "4c250f74", "metadata": {}, "source": [ "The number of samples is the number of hyperparameter combinations that will be tried out. This Tune run is set to `1000` samples.\n", "(you can decrease this if it takes too long on your machine)." 
] }, { "cell_type": "code", "execution_count": 9, "id": "f6c21314", "metadata": {}, "outputs": [], "source": [ "num_samples = 1000" ] }, { "cell_type": "code", "execution_count": 10, "id": "9533aabf", "metadata": { "tags": [ "remove-cell" ] }, "outputs": [], "source": [ "# We override here for our smoke tests.\n", "num_samples = 10" ] }, { "attachments": {}, "cell_type": "markdown", "id": "92942b88", "metadata": {}, "source": [ "Finally, we run the experiment to `\"min\"`imize the \"mean_loss\" of the `objective` by searching `search_space` via `algo`, `num_samples` times. This previous sentence is fully characterizes the search problem we aim to solve. With this in mind, notice how efficient it is to execute `tuner.fit()`." ] }, { "cell_type": "code", "execution_count": 11, "id": "4e224bb2", "metadata": { "tags": [ "hide-output" ] }, "outputs": [ { "data": { "text/html": [ "
\n", "
\n", "
\n", "

Tune Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "
Current time:2025-02-10 18:06:12
Running for: 00:00:35.68
Memory: 22.7/36.0 GiB
\n", "
\n", "
\n", "
\n", "

System Info

\n", " Using FIFO scheduling algorithm.
Logical resource usage: 1.0/12 CPUs, 0/0 GPUs\n", "
\n", " \n", "
\n", "
\n", "
\n", "

Trial Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
Trial name status loc activation height width loss iter total time (s) iterations
objective_989a402cTERMINATED127.0.0.1:42307relu 6.57558 8.6631310.7728 100 10.3642 99
objective_d99d28c6TERMINATED127.0.0.1:42321tanh 51.2103 19.2804 5.17314 100 10.3775 99
objective_ce34b92bTERMINATED127.0.0.1:42323tanh -49.4554 17.2683 -4.88739 100 10.3741 99
objective_f650ea5fTERMINATED127.0.0.1:42332tanh 20.6147 3.19539 2.3679 100 10.3804 99
objective_e72e976eTERMINATED127.0.0.1:42356relu -12.5302 3.45152 9.03132 100 10.372 99
objective_d00b4e1aTERMINATED127.0.0.1:42362tanh 65.8592 3.14335 6.89726 100 10.3776 99
objective_30c6ec86TERMINATED127.0.0.1:42367tanh -82.0713 14.2595 -8.13679 100 10.3755 99
objective_691ce63cTERMINATED127.0.0.1:42368tanh 29.406 2.21881 3.37602 100 10.3653 99
objective_3051162cTERMINATED127.0.0.1:42404relu 61.1787 12.9673 16.1952 100 10.3885 99
objective_04a38992TERMINATED127.0.0.1:42405relu 6.2868811.4537 10.7161 100 10.4051 99
\n", "
\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "tuner = tune.Tuner(\n", " objective,\n", " tune_config=tune.TuneConfig(\n", " metric=\"mean_loss\",\n", " mode=\"min\",\n", " search_alg=algo,\n", " num_samples=num_samples,\n", " ),\n", " param_space=search_space,\n", ")\n", "results = tuner.fit()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "b66aab6a", "metadata": {}, "source": [ "And now we have the hyperparameters found to minimize the mean loss." ] }, { "cell_type": "code", "execution_count": 12, "id": "e69db02e", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best hyperparameters found were: {'steps': 100, 'width': 14.259467682064852, 'height': -82.07132174642958, 'activation': 'tanh'}\n" ] } ], "source": [ "print(\"Best hyperparameters found were: \", results.get_best_result().config)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "d545d30b", "metadata": {}, "source": [ "## Providing an initial set of hyperparameters\n", "\n", "While defining the search algorithm, we may choose to provide an initial set of hyperparameters that we believe are especially promising or informative, and\n", "pass this information as a helpful starting point for the `OptunaSearch` object." ] }, { "cell_type": "code", "execution_count": 13, "id": "7596b7f4", "metadata": {}, "outputs": [], "source": [ "initial_params = [\n", " {\"width\": 1, \"height\": 2, \"activation\": \"relu\"},\n", " {\"width\": 4, \"height\": 2, \"activation\": \"relu\"},\n", "]" ] }, { "attachments": {}, "cell_type": "markdown", "id": "f84bbff0", "metadata": {}, "source": [ "Now the `search_alg` built using `OptunaSearch` takes `points_to_evaluate`." ] }, { "cell_type": "code", "execution_count": 14, "id": "320d1935", "metadata": { "lines_to_next_cell": 0 }, "outputs": [], "source": [ "searcher = OptunaSearch(points_to_evaluate=initial_params)\n", "algo = ConcurrencyLimiter(searcher, max_concurrent=4)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "9147d9a2", "metadata": {}, "source": [ "And run the experiment with initial hyperparameter evaluations:" ] }, { "cell_type": "code", "execution_count": 15, "id": "ee442efd", "metadata": { "tags": [ "hide-output" ] }, "outputs": [ { "data": { "text/html": [ "
\n", "
\n", "
\n", "

Tune Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "
Current time:2025-02-10 18:06:47
Running for: 00:00:35.44
Memory: 22.7/36.0 GiB
\n", "
\n", "
\n", "
\n", "

System Info

\n", " Using FIFO scheduling algorithm.
Logical resource usage: 1.0/12 CPUs, 0/0 GPUs\n", "
\n", " \n", "
\n", "
\n", "
\n", "

Trial Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
Trial name status loc activation height width loss iter total time (s) iterations
objective_1d2e715fTERMINATED127.0.0.1:42435relu 2 1 11.1174 100 10.3556 99
objective_f7c2aed0TERMINATED127.0.0.1:42436relu 2 4 10.4463 100 10.3702 99
objective_09dcce33TERMINATED127.0.0.1:42438tanh 28.5547 17.4195 2.91312 100 10.3483 99
objective_b9955517TERMINATED127.0.0.1:42443tanh -73.0995 13.8859 -7.23773 100 10.3682 99
objective_d81ebd5cTERMINATED127.0.0.1:42464relu -1.86597 1.4609310.4601 100 10.3969 99
objective_3f0030e7TERMINATED127.0.0.1:42465relu 38.7166 1.3696 14.5585 100 10.3741 99
objective_86bf6402TERMINATED127.0.0.1:42470tanh 40.269 5.13015 4.21999 100 10.3769 99
objective_75d06a83TERMINATED127.0.0.1:42471tanh -11.2824 3.10251-0.812933 100 10.3695 99
objective_0d197811TERMINATED127.0.0.1:42496tanh 91.7076 15.1032 9.2372 100 10.3631 99
objective_5156451fTERMINATED127.0.0.1:42497tanh 58.9282 3.96315 6.14136 100 10.4732 99
\n", "
\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "tuner = tune.Tuner(\n", " objective,\n", " tune_config=tune.TuneConfig(\n", " metric=\"mean_loss\",\n", " mode=\"min\",\n", " search_alg=algo,\n", " num_samples=num_samples,\n", " ),\n", " param_space=search_space,\n", ")\n", "results = tuner.fit()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "ccfe15e2", "metadata": {}, "source": [ "We take another look at the optimal hyperparameters." ] }, { "cell_type": "code", "execution_count": 16, "id": "fcfa0c2e", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best hyperparameters found were: {'steps': 100, 'width': 13.885889617119432, 'height': -73.09947583621019, 'activation': 'tanh'}\n" ] } ], "source": [ "print(\"Best hyperparameters found were: \", results.get_best_result().config)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "88080576", "metadata": {}, "source": [ "## Conditional search spaces \n", "\n", "Sometimes we may want to build a more complicated search space that has conditional dependencies on other hyperparameters. In this case, we pass a define-by-run function to the `search_alg` argument in `ray.tune()`." ] }, { "cell_type": "code", "execution_count": 17, "id": "f0acc2fc", "metadata": { "lines_to_next_cell": 0 }, "outputs": [], "source": [ "def define_by_run_func(trial) -> Optional[Dict[str, Any]]:\n", " \"\"\"Define-by-run function to construct a conditional search space.\n", "\n", " Ensure no actual computation takes place here. That should go into\n", " the trainable passed to ``Tuner()`` (in this example, that's\n", " ``objective``).\n", "\n", " For more information, see https://optuna.readthedocs.io/en/stable\\\n", " /tutorial/10_key_features/002_configurations.html\n", "\n", " Args:\n", " trial: Optuna Trial object\n", " \n", " Returns:\n", " Dict containing constant parameters or None\n", " \"\"\"\n", "\n", " activation = trial.suggest_categorical(\"activation\", [\"relu\", \"tanh\"])\n", "\n", " # Define-by-run allows for conditional search spaces.\n", " if activation == \"relu\":\n", " trial.suggest_float(\"width\", 0, 20)\n", " trial.suggest_float(\"height\", -100, 100)\n", " else:\n", " trial.suggest_float(\"width\", -1, 21)\n", " trial.suggest_float(\"height\", -101, 101)\n", " \n", " # Return all constants in a dictionary.\n", " return {\"steps\": 100}" ] }, { "attachments": {}, "cell_type": "markdown", "id": "4c9d0945", "metadata": {}, "source": [ "As before, we create the `search_alg` from `OptunaSearch` and `ConcurrencyLimiter`, this time we define the scope of search via the `space` argument and provide no initialization. We also must specific metric and mode when using `space`. " ] }, { "cell_type": "code", "execution_count": 18, "id": "906f9ffc", "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "[I 2025-02-10 18:06:47,670] A new study created in memory with name: optuna\n" ] } ], "source": [ "searcher = OptunaSearch(space=define_by_run_func, metric=\"mean_loss\", mode=\"min\")\n", "algo = ConcurrencyLimiter(searcher, max_concurrent=4)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "fea9399c", "metadata": {}, "source": [ "Running the experiment with a define-by-run search space:" ] }, { "cell_type": "code", "execution_count": 19, "id": "bf0ee932", "metadata": { "tags": [ "hide-output" ] }, "outputs": [ { "data": { "text/html": [ "
\n", "
\n", "
\n", "

Tune Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "
Current time:2025-02-10 18:07:23
Running for: 00:00:35.58
Memory: 22.9/36.0 GiB
\n", "
\n", "
\n", "
\n", "

System Info

\n", " Using FIFO scheduling algorithm.
Logical resource usage: 1.0/12 CPUs, 0/0 GPUs\n", "
\n", " \n", "
\n", "
\n", "
\n", "

Trial Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
Trial name status loc activation height steps width loss iter total time (s) iterations
objective_48aa8fedTERMINATED127.0.0.1:42529relu -76.595 100 9.90896 2.44141 100 10.3957 99
objective_5f395194TERMINATED127.0.0.1:42531relu -34.1447 10012.9999 6.66263 100 10.3823 99
objective_e64a7441TERMINATED127.0.0.1:42532relu -50.3172 100 3.95399 5.21738 100 10.3839 99
objective_8e668790TERMINATED127.0.0.1:42537tanh 30.9768 10016.22 3.15957 100 10.3818 99
objective_78ca576bTERMINATED127.0.0.1:42559relu 80.5037 100 0.90613919.0533 100 10.3731 99
objective_4cd9e37aTERMINATED127.0.0.1:42560relu 77.0988 100 8.43807 17.8282 100 10.3881 99
objective_a40498d5TERMINATED127.0.0.1:42565tanh -24.0393 10012.7274 -2.32519 100 10.4031 99
objective_43e7ea7eTERMINATED127.0.0.1:42566tanh -92.349 10015.8595 -9.17161 100 10.4602 99
objective_cb92227eTERMINATED127.0.0.1:42591relu 3.58988 10017.3259 10.417 100 10.3817 99
objective_abed5125TERMINATED127.0.0.1:42608tanh 86.0127 10011.2746 8.69007 100 10.3995 99
\n", "
\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "tuner = tune.Tuner(\n", " objective,\n", " tune_config=tune.TuneConfig(\n", " search_alg=algo,\n", " num_samples=num_samples,\n", " ),\n", ")\n", "results = tuner.fit()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "11e1ee04", "metadata": {}, "source": [ "We take a look again at the optimal hyperparameters." ] }, { "cell_type": "code", "execution_count": 20, "id": "13e4ce18", "metadata": { "lines_to_next_cell": 0 }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best hyperparameters for loss found were: {'activation': 'tanh', 'width': 15.859495323836288, 'height': -92.34898015005697, 'steps': 100}\n" ] } ], "source": [ "print(\"Best hyperparameters for loss found were: \", results.get_best_result(\"mean_loss\", \"min\").config)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "34bbd066", "metadata": {}, "source": [ "## Multi-objective optimization\n", "\n", "Finally, let's take a look at the multi-objective case. This permits us to optimize multiple metrics at once, and organize our results based on the different objectives." ] }, { "cell_type": "code", "execution_count": 21, "id": "b233cbea", "metadata": {}, "outputs": [], "source": [ "def multi_objective(config):\n", " # Hyperparameters\n", " width, height = config[\"width\"], config[\"height\"]\n", "\n", " for step in range(config[\"steps\"]):\n", " # Iterative training function - can be any arbitrary training procedure\n", " intermediate_score = evaluate(step, config[\"width\"], config[\"height\"], config[\"activation\"])\n", " # Feed the score back back to Tune.\n", " tune.report({\n", " \"iterations\": step, \"loss\": intermediate_score, \"gain\": intermediate_score * width\n", " })" ] }, { "attachments": {}, "cell_type": "markdown", "id": "338e5108", "metadata": {}, "source": [ "We define the `OptunaSearch` object this time with metric and mode as list arguments." ] }, { "cell_type": "code", "execution_count": 22, "id": "624d0bc8", "metadata": { "tags": [ "hide-output" ] }, "outputs": [ { "data": { "text/html": [ "
\n", "
\n", "
\n", "

Tune Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "
Current time:2025-02-10 18:07:58
Running for: 00:00:35.27
Memory: 22.7/36.0 GiB
\n", "
\n", "
\n", "
\n", "

System Info

\n", " Using FIFO scheduling algorithm.
Logical resource usage: 1.0/12 CPUs, 0/0 GPUs\n", "
\n", " \n", "
\n", "
\n", "
\n", "

Trial Status

\n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "
Trial name status loc activation height width iter total time (s) iterations loss gain
multi_objective_0534ec01TERMINATED127.0.0.1:42659tanh 18.3209 8.1091 100 10.3653 99 1.95513 15.8543
multi_objective_d3a487a7TERMINATED127.0.0.1:42660relu -67.8896 2.58816 100 10.3682 99 3.58666 9.28286
multi_objective_f481c3dbTERMINATED127.0.0.1:42665relu 46.643919.5326 100 10.3677 9914.7158 287.438
multi_objective_74a41d72TERMINATED127.0.0.1:42666tanh -31.950811.413 100 10.3685 99-3.10735-35.4643
multi_objective_d673b1aeTERMINATED127.0.0.1:42695relu 83.6004 5.04972 100 10.3494 9918.5561 93.7034
multi_objective_25ddc340TERMINATED127.0.0.1:42701relu -81.7161 4.45303 100 10.382 99 2.05019 9.12955
multi_objective_f8554c17TERMINATED127.0.0.1:42702tanh 43.5854 6.84585 100 10.3638 99 4.50394 30.8333
multi_objective_a144e315TERMINATED127.0.0.1:42707tanh 39.807519.1985 100 10.3706 99 4.03309 77.4292
multi_objective_50540842TERMINATED127.0.0.1:42739relu 75.280511.4041 100 10.3529 9917.6158 200.893
multi_objective_f322a9e3TERMINATED127.0.0.1:42740relu -51.3587 5.31683 100 10.3756 99 5.05057 26.853
\n", "
\n", "
\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "searcher = OptunaSearch(metric=[\"loss\", \"gain\"], mode=[\"min\", \"max\"])\n", "algo = ConcurrencyLimiter(searcher, max_concurrent=4)\n", "\n", "tuner = tune.Tuner(\n", " multi_objective,\n", " tune_config=tune.TuneConfig(\n", " search_alg=algo,\n", " num_samples=num_samples,\n", " ),\n", " param_space=search_space\n", ")\n", "results = tuner.fit();" ] }, { "attachments": {}, "cell_type": "markdown", "id": "df42b8b3", "metadata": {}, "source": [ "Now there are two hyperparameter sets for the two objectives." ] }, { "cell_type": "code", "execution_count": 23, "id": "183fef1a", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best hyperparameters for loss found were: {'steps': 100, 'width': 11.41302483988651, 'height': -31.950786209072476, 'activation': 'tanh'}\n", "Best hyperparameters for gain found were: {'steps': 100, 'width': 19.532566002677832, 'height': 46.643925051045784, 'activation': 'relu'}\n" ] } ], "source": [ "print(\"Best hyperparameters for loss found were: \", results.get_best_result(\"loss\", \"min\").config)\n", "print(\"Best hyperparameters for gain found were: \", results.get_best_result(\"gain\", \"max\").config)" ] }, { "attachments": {}, "cell_type": "markdown", "id": "cdf4d49a", "metadata": {}, "source": [ "We can mix-and-match the use of initial hyperparameter evaluations, conditional search spaces via define-by-run functions, and multi-objective tasks. This is also true of scheduler usage, with the exception of multi-objective optimization-- schedulers typically rely on a single scalar score, rather than the two scores we use here: loss, gain." ] }, { "cell_type": "code", "execution_count": 24, "id": "a058fdb3", "metadata": { "tags": [ "remove-cell" ] }, "outputs": [], "source": [ "ray.shutdown()" ] } ], "metadata": { "kernelspec": { "display_name": "ray", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.16" }, "orphan": true }, "nbformat": 4, "nbformat_minor": 5 }