# A Basic Tune Tutorial

This tutorial will walk you through the process of setting up Tune. Specifically, we’ll leverage early stopping and Bayesian Optimization (via HyperOpt) to optimize your PyTorch model.

Tip

If you have suggestions as to how to improve this tutorial, please let us know!

To run this example, you will need to install the following:
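The install list itself is missing from this excerpt; based on the libraries used below (Ray Tune, HyperOpt, and PyTorch), the dependencies are likely along these lines (exact package names and versions are an assumption):

```shell
# Assumed dependencies for this tutorial: Ray Tune, HyperOpt, and PyTorch.
pip install "ray[tune]" hyperopt torch torchvision
```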



## Search Algorithms in Tune

In addition to TrialSchedulers, you can further optimize your hyperparameters by using an intelligent search technique like Bayesian Optimization. To do this, you can use a Tune Search Algorithm. Search Algorithms leverage optimization algorithms to intelligently navigate the given hyperparameter space.

Note that each library has a specific way of defining the search space.

```python
from ray import tune
from hyperopt import hp
from ray.tune.suggest.hyperopt import HyperOptSearch
import numpy as np

# hp.loguniform takes its bounds in log space (a sample is
# exp(uniform(low, high))), so pass log-transformed values.
space = {
    "lr": hp.loguniform("lr", np.log(1e-4), np.log(1e-1)),
    "momentum": hp.uniform("momentum", 0.1, 0.9),
}

hyperopt_search = HyperOptSearch(space, metric="mean_accuracy", mode="max")

analysis = tune.run(train_mnist, num_samples=10, search_alg=hyperopt_search)

# To enable GPUs, request one GPU per trial instead:
# analysis = tune.run(
#     train_mnist, num_samples=10, search_alg=hyperopt_search,
#     resources_per_trial={"gpu": 1})
```
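As an aside, `hp.loguniform` interprets its bounds in log space: a sample is `exp(uniform(low, high))`. This pure-Python sketch (no hyperopt required; the helper name is ours) mimics that behavior to show the resulting range:

```python
import math
import random

def loguniform_sample(low, high, rng):
    # Mirrors hp.loguniform's semantics: exp of a uniform draw over [low, high].
    return math.exp(rng.uniform(low, high))

rng = random.Random(0)
low, high = math.log(1e-4), math.log(1e-1)
samples = [loguniform_sample(low, high, rng) for _ in range(1000)]
# Every sample falls inside [1e-4, 1e-1].
print(f"min={min(samples):.6f} max={max(samples):.6f}")
```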



Note

Tune allows you to use some search algorithms in combination with different trial schedulers. See this page for more details.
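The combination can be sketched roughly as follows, pairing the `HyperOptSearch` from above with an early-stopping scheduler. This assumes Ray Tune's `ASHAScheduler` (asynchronous successive halving); the exact plumbing of `metric`/`mode` between searcher, scheduler, and `tune.run` varies by Ray version, so treat this as a sketch rather than a definitive recipe:

```python
from ray import tune
from ray.tune.schedulers import ASHAScheduler

# An early-stopping scheduler; underperforming trials are halted early.
scheduler = ASHAScheduler(metric="mean_accuracy", mode="max")

analysis = tune.run(
    train_mnist,
    num_samples=10,
    search_alg=hyperopt_search,  # the HyperOptSearch defined above
    scheduler=scheduler,
)
```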

You can evaluate the best trained model using the `Analysis` object returned by `tune.run`:

```python
import os

import torch

df = analysis.results_df
logdir = analysis.get_best_logdir("mean_accuracy", mode="max")

# Load the checkpoint saved by the training function
# (this assumes it was saved as "model.pth" in the trial's logdir).
state_dict = torch.load(os.path.join(logdir, "model.pth"))

model = ConvNet()
model.load_state_dict(state_dict)
```