# Running Tune experiments with Skopt

In this tutorial we introduce Skopt while running a simple Ray Tune experiment. Tune’s Search Algorithms integrate with Skopt and, as a result, allow you to seamlessly scale up a Skopt optimization process without sacrificing performance.

Scikit-Optimize, or skopt, is a simple and efficient library to optimize expensive and noisy black-box functions, e.g. large-scale ML experiments. It implements several methods for sequential model-based optimization. Notably, skopt does not perform gradient-based optimization, and instead uses computationally cheap surrogate models to approximate the expensive function. In this example we minimize a simple objective to briefly demonstrate the usage of Skopt with Ray Tune via SkOptSearch. It’s useful to keep in mind that despite the emphasis on machine learning experiments, Ray Tune optimizes any implicit or explicit objective. Here we assume the scikit-optimize==0.8.1 library is installed. To learn more, please refer to the Scikit-Optimize website.
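To make the surrogate-model idea concrete, here is a minimal standalone skopt sketch, independent of Ray Tune. The toy objective and bounds here are made up purely for illustration:

```python
from skopt import gp_minimize

# Toy "expensive" objective: skopt only sees its return values,
# never its gradients.
def black_box(params):
    x, y = params
    return (x - 2) ** 2 + (y + 1) ** 2

# gp_minimize fits a Gaussian-process surrogate to past evaluations
# and uses it to choose each next point to try.
result = gp_minimize(
    black_box,
    dimensions=[(-5.0, 5.0), (-5.0, 5.0)],  # bounds for x and y
    n_calls=15,  # total number of objective evaluations
    random_state=0,
)
print(result.x, result.fun)  # best point found and its value
```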

Below are all the imports we need for this example.
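If the libraries are not installed yet, a minimal setup in a notebook cell might look like this (assuming pip is available; from a shell, drop the leading `!`):

```python
!pip install "ray[tune]" scikit-optimize==0.8.1
```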

```python
import time
from typing import Dict, Optional, Any

import ray

import skopt
from ray import tune
from ray.tune.suggest import ConcurrencyLimiter
from ray.tune.suggest.skopt import SkOptSearch
```


Let’s start by defining a simple evaluation function. An explicit math formula is queried here for demonstration, yet in practice this is typically a black-box function, e.g. the performance results after training an ML model. We artificially sleep for 0.1 seconds to simulate a long-running ML experiment. This setup assumes that we’re running multiple steps of an experiment while tuning three hyperparameters: width, height, and activation.

```python
def evaluate(step, width, height, activation):
    time.sleep(0.1)  # simulate an expensive evaluation
    activation_boost = 10 if activation == "relu" else 0
    return (0.1 + width * step / 100) ** (-1) + height * 0.1 + activation_boost
```


Next, our objective function to be optimized takes a Tune config, evaluates the score of your experiment in a training loop, and uses tune.report to report the score back to Tune.

```python
def objective(config):
    for step in range(config["steps"]):
        score = evaluate(step, config["width"], config["height"], config["activation"])
        tune.report(iterations=step, mean_loss=score)  # send the score back to Tune
```


Next we define a search space. The critical assumption is that the optimal hyperparameters live within this space. Yet, if the space is very large, then those hyperparameters may be difficult to find in a short amount of time.

```python
search_space = {
    "steps": 100,
    "width": tune.uniform(0, 20),
    "height": tune.uniform(-100, 100),
    "activation": tune.choice(["relu", "tanh"]),
}
```


The search algorithm is instantiated from the SkOptSearch class. We also constrain the number of concurrent trials to 4 with a ConcurrencyLimiter.

```python
algo = SkOptSearch()
algo = ConcurrencyLimiter(algo, max_concurrent=4)
```
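Limiting concurrency is a common design choice with model-based searchers: skopt proposes each new point based on all results observed so far, so running too many trials in parallel means more points get suggested from a stale surrogate model.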


The number of samples is the number of hyperparameter combinations that will be tried out. This Tune run is set to 1000 samples. (You can decrease this if it takes too long on your machine.)

```python
num_samples = 1000
```


Finally, we run the experiment to "min"imize the “mean_loss” of the objective by searching search_space via algo, num_samples times. The previous sentence fully characterizes the search problem we aim to solve. With this in mind, notice how efficient it is to execute tune.run().

```python
analysis = tune.run(
    objective,
    search_alg=algo,
    metric="mean_loss",
    mode="min",
    name="skopt_exp",
    num_samples=num_samples,
    config=search_space,
)
```


We now have the hyperparameters found to minimize the mean loss.

print("Best hyperparameters found were: ", analysis.best_config)


## Providing an initial set of hyperparameters

While defining the search algorithm, we may choose to provide an initial set of hyperparameters that we believe are especially promising or informative, and pass this information as a helpful starting point for the SkOptSearch object. We can also pass the known rewards for these initial params to save on unnecessary computation.

```python
initial_params = [
    {"width": 10, "height": 0, "activation": "relu"},
    {"width": 15, "height": -20, "activation": "tanh"},
]
known_rewards = [-189, -1144]
```


Now the search_alg built using SkOptSearch takes points_to_evaluate, along with the matching evaluated_rewards.

```python
algo = SkOptSearch(
    points_to_evaluate=initial_params,
    evaluated_rewards=known_rewards,  # skip re-running these known trials
)
algo = ConcurrencyLimiter(algo, max_concurrent=4)
```
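Note that evaluated_rewards must line up one-to-one with points_to_evaluate, so each listed configuration is paired with the metric value it previously produced.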


And again run the experiment, this time with initial hyperparameter evaluations:

```python
analysis = tune.run(
    objective,
    search_alg=algo,
    metric="mean_loss",
    mode="min",
    name="skopt_exp_with_warmstart",
    num_samples=num_samples,
    config=search_space,
)
```


And we again show the best hyperparameters.

print("Best hyperparameters found were: ", analysis.best_config)